Test Report: Docker_Linux_crio 16899

f8194aff3a7b98ea29a2e4b2da65132feb1e4119:2023-07-17:30190

Failed tests (6/304)

|-------|------------------------------------------------------|--------------|
| Order | Failed test                                          | Duration (s) |
|-------|------------------------------------------------------|--------------|
|    25 | TestAddons/parallel/Ingress                          |       158.19 |
|   102 | TestFunctional/parallel/License                      |         0.53 |
|   154 | TestIngressAddonLegacy/serial/ValidateIngressAddons  |       181.60 |
|   204 | TestMultiNode/serial/PingHostFrom2Pods               |         2.97 |
|   225 | TestRunningBinaryUpgrade                             |       118.12 |
|   233 | TestStoppedBinaryUpgrade/Upgrade                     |       122.39 |
|-------|------------------------------------------------------|--------------|
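
The entries above are Go subtests from minikube's integration suite. A minimal sketch of re-running a single one from a minikube source checkout follows; the suite takes additional flags (driver, runtime, start args) that vary by version, so treat this exact invocation as an assumption:

    package main

    import (
    	"os"
    	"os/exec"
    )

    // Re-run one failing subtest by name. The flag set is illustrative only;
    // consult the minikube repo for the full integration-test invocation.
    func main() {
    	cmd := exec.Command("go", "test", "./test/integration",
    		"-run", "TestAddons/parallel/Ingress", "-timeout", "30m")
    	cmd.Stdout = os.Stdout
    	cmd.Stderr = os.Stderr
    	if err := cmd.Run(); err != nil {
    		os.Exit(1)
    	}
    }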
TestAddons/parallel/Ingress (158.19s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-759450 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-759450 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-759450 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [eaba1533-418f-4b22-8eb7-925c0f11cab1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [eaba1533-418f-4b22-8eb7-925c0f11cab1] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 16.006536871s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p addons-759450 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-759450 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.098185959s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
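
The failed probe shells curl through minikube ssh, and ssh reports the remote command's exit code: curl's status 28 is CURLE_OPERATION_TIMEDOUT, meaning the ingress controller never answered on port 80 within curl's window. A minimal Go sketch of an equivalent check (a hypothetical standalone probe, not the test's own code; it assumes it runs where the node's port 80 is reachable):

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	// Ask port 80 for the virtual host the Ingress rule routes on,
    	// mirroring: curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'
    	req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
    	if err != nil {
    		panic(err)
    	}
    	req.Host = "nginx.example.com"

    	client := &http.Client{Timeout: 10 * time.Second}
    	resp, err := client.Do(req)
    	if err != nil {
    		fmt.Println("no answer from ingress:", err) // curl's analogue is exit status 28
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("status:", resp.Status)
    }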
addons_test.go:262: (dbg) Run:  kubectl --context addons-759450 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p addons-759450 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p addons-759450 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p addons-759450 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p addons-759450 addons disable ingress --alsologtostderr -v=1: (7.623043808s)
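
The follow-up steps exercise ingress-dns: `nslookup hello-john.test 192.168.49.2` queries the DNS server the addon exposes on the node IP reported by `minikube ip`. The same lookup in Go, pointing a resolver at that address (a sketch; the IP is the one from this run):

    package main

    import (
    	"context"
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	r := &net.Resolver{
    		PreferGo: true,
    		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
    			d := net.Dialer{Timeout: 5 * time.Second}
    			// Ignore the system resolver address; ask the ingress-dns addon directly.
    			return d.DialContext(ctx, network, "192.168.49.2:53")
    		},
    	}
    	ips, err := r.LookupHost(context.Background(), "hello-john.test")
    	if err != nil {
    		fmt.Println("lookup failed:", err)
    		return
    	}
    	fmt.Println("resolved:", ips)
    }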
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-759450
helpers_test.go:235: (dbg) docker inspect addons-759450:

-- stdout --
	[
	    {
	        "Id": "1ac74dbcb5be16ff84cde4de8ee804f340923f8f144a250c0e57f41478b19830",
	        "Created": "2023-07-17T21:58:44.562705933Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 227241,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-17T21:58:44.852891949Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6cc01e6091959400f260dc442708e7c71630b58dab1f7c344cb00926bd84950",
	        "ResolvConfPath": "/var/lib/docker/containers/1ac74dbcb5be16ff84cde4de8ee804f340923f8f144a250c0e57f41478b19830/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1ac74dbcb5be16ff84cde4de8ee804f340923f8f144a250c0e57f41478b19830/hostname",
	        "HostsPath": "/var/lib/docker/containers/1ac74dbcb5be16ff84cde4de8ee804f340923f8f144a250c0e57f41478b19830/hosts",
	        "LogPath": "/var/lib/docker/containers/1ac74dbcb5be16ff84cde4de8ee804f340923f8f144a250c0e57f41478b19830/1ac74dbcb5be16ff84cde4de8ee804f340923f8f144a250c0e57f41478b19830-json.log",
	        "Name": "/addons-759450",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-759450:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-759450",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/17b9c9fadcf73ec248d5a1e80bcc52bb12954589e9686ab8b87f754dad745810-init/diff:/var/lib/docker/overlay2/08d413eb0908d02df131d41f2ca629e52ff8a5bbd0c0c3f9b2a348a71c834d30/diff",
	                "MergedDir": "/var/lib/docker/overlay2/17b9c9fadcf73ec248d5a1e80bcc52bb12954589e9686ab8b87f754dad745810/merged",
	                "UpperDir": "/var/lib/docker/overlay2/17b9c9fadcf73ec248d5a1e80bcc52bb12954589e9686ab8b87f754dad745810/diff",
	                "WorkDir": "/var/lib/docker/overlay2/17b9c9fadcf73ec248d5a1e80bcc52bb12954589e9686ab8b87f754dad745810/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-759450",
	                "Source": "/var/lib/docker/volumes/addons-759450/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-759450",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-759450",
	                "name.minikube.sigs.k8s.io": "addons-759450",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e50580269d9037372e3730b200673f22d729243f0a3de744f24dde0f25049bea",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e50580269d90",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-759450": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "1ac74dbcb5be",
	                        "addons-759450"
	                    ],
	                    "NetworkID": "0a54d91ae2143d3c34da9f41747dabbdf7cbaa5cc99ee7c23379a296d9329b2c",
	                    "EndpointID": "6c7ebaff83f670e744d8787d71f428e218f4d074eb4cdf25a26b68ba0dfa538d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
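
The inspect output above confirms the kicbase container is still Running with exit code 0 and the expected localhost port bindings. A small sketch of pulling just those state fields programmatically (the struct covers only the fields shown above):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // Minimal view of `docker inspect` output: only the fields checked here.
    type containerInfo struct {
    	Name  string
    	State struct {
    		Status   string
    		Running  bool
    		ExitCode int
    	}
    }

    func main() {
    	out, err := exec.Command("docker", "inspect", "addons-759450").Output()
    	if err != nil {
    		panic(err)
    	}
    	var infos []containerInfo // docker inspect returns a JSON array
    	if err := json.Unmarshal(out, &infos); err != nil {
    		panic(err)
    	}
    	for _, c := range infos {
    		fmt.Printf("%s: status=%s running=%v exit=%d\n",
    			c.Name, c.State.Status, c.State.Running, c.State.ExitCode)
    	}
    }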
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-759450 -n addons-759450
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-759450 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-759450 logs -n 25: (1.155439273s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-151003   | jenkins | v1.31.0 | 17 Jul 23 21:57 UTC |                     |
	|         | -p download-only-151003        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-151003   | jenkins | v1.31.0 | 17 Jul 23 21:57 UTC |                     |
	|         | -p download-only-151003        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3   |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | --all                          | minikube               | jenkins | v1.31.0 | 17 Jul 23 21:58 UTC | 17 Jul 23 21:58 UTC |
	| delete  | -p download-only-151003        | download-only-151003   | jenkins | v1.31.0 | 17 Jul 23 21:58 UTC | 17 Jul 23 21:58 UTC |
	| delete  | -p download-only-151003        | download-only-151003   | jenkins | v1.31.0 | 17 Jul 23 21:58 UTC | 17 Jul 23 21:58 UTC |
	| start   | --download-only -p             | download-docker-750030 | jenkins | v1.31.0 | 17 Jul 23 21:58 UTC |                     |
	|         | download-docker-750030         |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | -p download-docker-750030      | download-docker-750030 | jenkins | v1.31.0 | 17 Jul 23 21:58 UTC | 17 Jul 23 21:58 UTC |
	| start   | --download-only -p             | binary-mirror-596153   | jenkins | v1.31.0 | 17 Jul 23 21:58 UTC |                     |
	|         | binary-mirror-596153           |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --binary-mirror                |                        |         |         |                     |                     |
	|         | http://127.0.0.1:35365         |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-596153        | binary-mirror-596153   | jenkins | v1.31.0 | 17 Jul 23 21:58 UTC | 17 Jul 23 21:58 UTC |
	| start   | -p addons-759450               | addons-759450          | jenkins | v1.31.0 | 17 Jul 23 21:58 UTC | 17 Jul 23 22:00 UTC |
	|         | --wait=true --memory=4000      |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --addons=registry              |                        |         |         |                     |                     |
	|         | --addons=metrics-server        |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                        |         |         |                     |                     |
	|         | --addons=gcp-auth              |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --addons=ingress               |                        |         |         |                     |                     |
	|         | --addons=ingress-dns           |                        |         |         |                     |                     |
	|         | --addons=helm-tiller           |                        |         |         |                     |                     |
	| addons  | enable headlamp                | addons-759450          | jenkins | v1.31.0 | 17 Jul 23 22:00 UTC | 17 Jul 23 22:00 UTC |
	|         | -p addons-759450               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-759450          | jenkins | v1.31.0 | 17 Jul 23 22:00 UTC | 17 Jul 23 22:00 UTC |
	|         | addons-759450                  |                        |         |         |                     |                     |
	| addons  | addons-759450 addons           | addons-759450          | jenkins | v1.31.0 | 17 Jul 23 22:00 UTC | 17 Jul 23 22:00 UTC |
	|         | disable metrics-server         |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| ip      | addons-759450 ip               | addons-759450          | jenkins | v1.31.0 | 17 Jul 23 22:00 UTC | 17 Jul 23 22:00 UTC |
	| addons  | addons-759450 addons disable   | addons-759450          | jenkins | v1.31.0 | 17 Jul 23 22:00 UTC | 17 Jul 23 22:00 UTC |
	|         | registry --alsologtostderr     |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p    | addons-759450          | jenkins | v1.31.0 | 17 Jul 23 22:00 UTC | 17 Jul 23 22:01 UTC |
	|         | addons-759450                  |                        |         |         |                     |                     |
	| addons  | addons-759450 addons disable   | addons-759450          | jenkins | v1.31.0 | 17 Jul 23 22:01 UTC | 17 Jul 23 22:01 UTC |
	|         | helm-tiller --alsologtostderr  |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| ssh     | addons-759450 ssh curl -s      | addons-759450          | jenkins | v1.31.0 | 17 Jul 23 22:01 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:    |                        |         |         |                     |                     |
	|         | nginx.example.com'             |                        |         |         |                     |                     |
	| addons  | addons-759450 addons           | addons-759450          | jenkins | v1.31.0 | 17 Jul 23 22:01 UTC | 17 Jul 23 22:01 UTC |
	|         | disable csi-hostpath-driver    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | addons-759450 addons           | addons-759450          | jenkins | v1.31.0 | 17 Jul 23 22:01 UTC | 17 Jul 23 22:01 UTC |
	|         | disable volumesnapshots        |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| ip      | addons-759450 ip               | addons-759450          | jenkins | v1.31.0 | 17 Jul 23 22:03 UTC | 17 Jul 23 22:03 UTC |
	| addons  | addons-759450 addons disable   | addons-759450          | jenkins | v1.31.0 | 17 Jul 23 22:03 UTC | 17 Jul 23 22:03 UTC |
	|         | ingress-dns --alsologtostderr  |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| addons  | addons-759450 addons disable   | addons-759450          | jenkins | v1.31.0 | 17 Jul 23 22:03 UTC | 17 Jul 23 22:03 UTC |
	|         | ingress --alsologtostderr -v=1 |                        |         |         |                     |                     |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 21:58:23
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 21:58:23.906365  226587 out.go:296] Setting OutFile to fd 1 ...
	I0717 21:58:23.906506  226587 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:58:23.906516  226587 out.go:309] Setting ErrFile to fd 2...
	I0717 21:58:23.906520  226587 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:58:23.906802  226587 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-218877/.minikube/bin
	I0717 21:58:23.907522  226587 out.go:303] Setting JSON to false
	I0717 21:58:23.908619  226587 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6048,"bootTime":1689625056,"procs":381,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 21:58:23.908679  226587 start.go:138] virtualization: kvm guest
	I0717 21:58:23.911657  226587 out.go:177] * [addons-759450] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	I0717 21:58:23.913902  226587 notify.go:220] Checking for updates...
	I0717 21:58:23.914230  226587 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 21:58:23.916074  226587 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 21:58:23.917851  226587 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16899-218877/kubeconfig
	I0717 21:58:23.919573  226587 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-218877/.minikube
	I0717 21:58:23.921519  226587 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 21:58:23.923454  226587 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 21:58:23.925485  226587 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 21:58:23.947851  226587 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 21:58:23.947985  226587 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 21:58:24.007717  226587 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:40 SystemTime:2023-07-17 21:58:23.997221113 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 21:58:24.007941  226587 docker.go:294] overlay module found
	I0717 21:58:24.010232  226587 out.go:177] * Using the docker driver based on user configuration
	I0717 21:58:24.011769  226587 start.go:298] selected driver: docker
	I0717 21:58:24.011787  226587 start.go:880] validating driver "docker" against <nil>
	I0717 21:58:24.011800  226587 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 21:58:24.012593  226587 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 21:58:24.065976  226587 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:40 SystemTime:2023-07-17 21:58:24.057387287 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 21:58:24.066176  226587 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0717 21:58:24.066368  226587 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 21:58:24.068660  226587 out.go:177] * Using Docker driver with root privileges
	I0717 21:58:24.070450  226587 cni.go:84] Creating CNI manager for ""
	I0717 21:58:24.070472  226587 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 21:58:24.070480  226587 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 21:58:24.070495  226587 start_flags.go:319] config:
	{Name:addons-759450 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-759450 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 21:58:24.072485  226587 out.go:177] * Starting control plane node addons-759450 in cluster addons-759450
	I0717 21:58:24.073918  226587 cache.go:122] Beginning downloading kic base image for docker with crio
	I0717 21:58:24.075483  226587 out.go:177] * Pulling base image ...
	I0717 21:58:24.077013  226587 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 21:58:24.077051  226587 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16899-218877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0717 21:58:24.077070  226587 cache.go:57] Caching tarball of preloaded images
	I0717 21:58:24.077071  226587 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 21:58:24.077148  226587 preload.go:174] Found /home/jenkins/minikube-integration/16899-218877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 21:58:24.077161  226587 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0717 21:58:24.077473  226587 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/config.json ...
	I0717 21:58:24.077515  226587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/config.json: {Name:mk03e014fbfe6a9b28a8a5c552fbe7d329e1e10f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:58:24.093496  226587 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0717 21:58:24.093617  226587 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory
	I0717 21:58:24.093636  226587 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory, skipping pull
	I0717 21:58:24.093644  226587 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in cache, skipping pull
	I0717 21:58:24.093650  226587 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 as a tarball
	I0717 21:58:24.093658  226587 cache.go:163] Loading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 from local cache
	I0717 21:58:35.618661  226587 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 from cached tarball
	I0717 21:58:35.618704  226587 cache.go:195] Successfully downloaded all kic artifacts
	I0717 21:58:35.618743  226587 start.go:365] acquiring machines lock for addons-759450: {Name:mk2bc08f70fe92afc29d47809a61bca84bf7d8a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 21:58:35.618841  226587 start.go:369] acquired machines lock for "addons-759450" in 77.508µs
	I0717 21:58:35.618864  226587 start.go:93] Provisioning new machine with config: &{Name:addons-759450 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-759450 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 21:58:35.618976  226587 start.go:125] createHost starting for "" (driver="docker")
	I0717 21:58:35.621027  226587 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0717 21:58:35.621293  226587 start.go:159] libmachine.API.Create for "addons-759450" (driver="docker")
	I0717 21:58:35.621325  226587 client.go:168] LocalClient.Create starting
	I0717 21:58:35.621426  226587 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca.pem
	I0717 21:58:35.799220  226587 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/cert.pem
	I0717 21:58:35.864404  226587 cli_runner.go:164] Run: docker network inspect addons-759450 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0717 21:58:35.880576  226587 cli_runner.go:211] docker network inspect addons-759450 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0717 21:58:35.880710  226587 network_create.go:281] running [docker network inspect addons-759450] to gather additional debugging logs...
	I0717 21:58:35.880730  226587 cli_runner.go:164] Run: docker network inspect addons-759450
	W0717 21:58:35.896536  226587 cli_runner.go:211] docker network inspect addons-759450 returned with exit code 1
	I0717 21:58:35.896581  226587 network_create.go:284] error running [docker network inspect addons-759450]: docker network inspect addons-759450: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-759450 not found
	I0717 21:58:35.896597  226587 network_create.go:286] output of [docker network inspect addons-759450]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-759450 not found
	
	** /stderr **
	I0717 21:58:35.896660  226587 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 21:58:35.912373  226587 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0011fede0}
	I0717 21:58:35.912426  226587 network_create.go:123] attempt to create docker network addons-759450 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0717 21:58:35.912482  226587 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-759450 addons-759450
	I0717 21:58:35.966709  226587 network_create.go:107] docker network addons-759450 192.168.49.0/24 created
	I0717 21:58:35.966745  226587 kic.go:117] calculated static IP "192.168.49.2" for the "addons-759450" container
	I0717 21:58:35.966847  226587 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0717 21:58:35.981281  226587 cli_runner.go:164] Run: docker volume create addons-759450 --label name.minikube.sigs.k8s.io=addons-759450 --label created_by.minikube.sigs.k8s.io=true
	I0717 21:58:35.998210  226587 oci.go:103] Successfully created a docker volume addons-759450
	I0717 21:58:35.998310  226587 cli_runner.go:164] Run: docker run --rm --name addons-759450-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-759450 --entrypoint /usr/bin/test -v addons-759450:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0717 21:58:39.610169  226587 cli_runner.go:217] Completed: docker run --rm --name addons-759450-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-759450 --entrypoint /usr/bin/test -v addons-759450:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib: (3.611797452s)
	I0717 21:58:39.610268  226587 oci.go:107] Successfully prepared a docker volume addons-759450
	I0717 21:58:39.610282  226587 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 21:58:39.610311  226587 kic.go:190] Starting extracting preloaded images to volume ...
	I0717 21:58:39.610368  226587 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16899-218877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-759450:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0717 21:58:44.497876  226587 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16899-218877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-759450:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (4.887448407s)
	I0717 21:58:44.497911  226587 kic.go:199] duration metric: took 4.887595 seconds to extract preloaded images to volume
	W0717 21:58:44.498049  226587 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0717 21:58:44.498154  226587 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0717 21:58:44.548670  226587 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-759450 --name addons-759450 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-759450 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-759450 --network addons-759450 --ip 192.168.49.2 --volume addons-759450:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0717 21:58:44.860440  226587 cli_runner.go:164] Run: docker container inspect addons-759450 --format={{.State.Running}}
	I0717 21:58:44.878654  226587 cli_runner.go:164] Run: docker container inspect addons-759450 --format={{.State.Status}}
	I0717 21:58:44.896455  226587 cli_runner.go:164] Run: docker exec addons-759450 stat /var/lib/dpkg/alternatives/iptables
	I0717 21:58:44.939787  226587 oci.go:144] the created container "addons-759450" has a running status.
	I0717 21:58:44.939818  226587 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16899-218877/.minikube/machines/addons-759450/id_rsa...
	I0717 21:58:45.114367  226587 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16899-218877/.minikube/machines/addons-759450/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0717 21:58:45.133512  226587 cli_runner.go:164] Run: docker container inspect addons-759450 --format={{.State.Status}}
	I0717 21:58:45.152371  226587 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0717 21:58:45.152402  226587 kic_runner.go:114] Args: [docker exec --privileged addons-759450 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0717 21:58:45.220985  226587 cli_runner.go:164] Run: docker container inspect addons-759450 --format={{.State.Status}}
	I0717 21:58:45.240394  226587 machine.go:88] provisioning docker machine ...
	I0717 21:58:45.240430  226587 ubuntu.go:169] provisioning hostname "addons-759450"
	I0717 21:58:45.240497  226587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-759450
	I0717 21:58:45.257945  226587 main.go:141] libmachine: Using SSH client type: native
	I0717 21:58:45.258630  226587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0717 21:58:45.258657  226587 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-759450 && echo "addons-759450" | sudo tee /etc/hostname
	I0717 21:58:45.260552  226587 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50230->127.0.0.1:32772: read: connection reset by peer
	I0717 21:58:48.401949  226587 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-759450
	
	I0717 21:58:48.402046  226587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-759450
	I0717 21:58:48.418480  226587 main.go:141] libmachine: Using SSH client type: native
	I0717 21:58:48.419144  226587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0717 21:58:48.419174  226587 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-759450' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-759450/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-759450' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 21:58:48.543591  226587 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 21:58:48.543631  226587 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16899-218877/.minikube CaCertPath:/home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16899-218877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16899-218877/.minikube}
	I0717 21:58:48.543698  226587 ubuntu.go:177] setting up certificates
	I0717 21:58:48.543713  226587 provision.go:83] configureAuth start
	I0717 21:58:48.543783  226587 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-759450
	I0717 21:58:48.559668  226587 provision.go:138] copyHostCerts
	I0717 21:58:48.559744  226587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16899-218877/.minikube/ca.pem (1078 bytes)
	I0717 21:58:48.559854  226587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16899-218877/.minikube/cert.pem (1123 bytes)
	I0717 21:58:48.559908  226587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16899-218877/.minikube/key.pem (1679 bytes)
	I0717 21:58:48.559952  226587 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16899-218877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca-key.pem org=jenkins.addons-759450 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-759450]
	I0717 21:58:48.632415  226587 provision.go:172] copyRemoteCerts
	I0717 21:58:48.632473  226587 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 21:58:48.632508  226587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-759450
	I0717 21:58:48.649197  226587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/addons-759450/id_rsa Username:docker}
	I0717 21:58:48.743884  226587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 21:58:48.765115  226587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0717 21:58:48.786399  226587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 21:58:48.807404  226587 provision.go:86] duration metric: configureAuth took 263.672764ms
	I0717 21:58:48.807454  226587 ubuntu.go:193] setting minikube options for container-runtime
	I0717 21:58:48.807623  226587 config.go:182] Loaded profile config "addons-759450": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 21:58:48.807749  226587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-759450
	I0717 21:58:48.824080  226587 main.go:141] libmachine: Using SSH client type: native
	I0717 21:58:48.824473  226587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0717 21:58:48.824489  226587 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 21:58:49.034883  226587 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 21:58:49.034913  226587 machine.go:91] provisioned docker machine in 3.794499241s
	I0717 21:58:49.034923  226587 client.go:171] LocalClient.Create took 13.413592839s
	I0717 21:58:49.034940  226587 start.go:167] duration metric: libmachine.API.Create for "addons-759450" took 13.413647618s
	I0717 21:58:49.034947  226587 start.go:300] post-start starting for "addons-759450" (driver="docker")
	I0717 21:58:49.034957  226587 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 21:58:49.035010  226587 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 21:58:49.035048  226587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-759450
	I0717 21:58:49.051504  226587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/addons-759450/id_rsa Username:docker}
	I0717 21:58:49.145414  226587 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 21:58:49.148663  226587 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 21:58:49.148696  226587 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 21:58:49.148704  226587 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 21:58:49.148714  226587 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0717 21:58:49.148725  226587 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-218877/.minikube/addons for local assets ...
	I0717 21:58:49.148785  226587 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-218877/.minikube/files for local assets ...
	I0717 21:58:49.148810  226587 start.go:303] post-start completed in 113.857287ms
	I0717 21:58:49.149103  226587 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-759450
	I0717 21:58:49.165553  226587 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/config.json ...
	I0717 21:58:49.165842  226587 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 21:58:49.165897  226587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-759450
	I0717 21:58:49.182723  226587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/addons-759450/id_rsa Username:docker}
	I0717 21:58:49.268310  226587 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 21:58:49.272490  226587 start.go:128] duration metric: createHost completed in 13.653498155s
	I0717 21:58:49.272515  226587 start.go:83] releasing machines lock for "addons-759450", held for 13.65366298s
	I0717 21:58:49.272574  226587 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-759450
	I0717 21:58:49.288747  226587 ssh_runner.go:195] Run: cat /version.json
	I0717 21:58:49.288807  226587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-759450
	I0717 21:58:49.288816  226587 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 21:58:49.288875  226587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-759450
	I0717 21:58:49.305637  226587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/addons-759450/id_rsa Username:docker}
	I0717 21:58:49.306465  226587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/addons-759450/id_rsa Username:docker}
	I0717 21:58:49.488234  226587 ssh_runner.go:195] Run: systemctl --version
	I0717 21:58:49.492239  226587 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 21:58:49.628956  226587 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 21:58:49.633250  226587 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 21:58:49.650795  226587 cni.go:227] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0717 21:58:49.650881  226587 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 21:58:49.676652  226587 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
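
The two find/mv passes above disable the stock loopback and bridge/podman CNI configs by renaming them with a .mk_disabled suffix, leaving the CNI that minikube installs later (kindnet, per the "recommending kindnet" lines) as the only active one. A minimal Go sketch of the same disable-by-rename idea, with the paths and suffix taken from the log (an illustrative helper, not minikube's actual code):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// disableCNIConfigs renames every matching CNI config so the runtime
// ignores it, mirroring the find/mv commands in the log above.
func disableCNIConfigs(dir string, patterns []string) error {
	for _, pat := range patterns {
		matches, err := filepath.Glob(filepath.Join(dir, pat))
		if err != nil {
			return err
		}
		for _, m := range matches {
			if filepath.Ext(m) == ".mk_disabled" {
				continue // already disabled
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return err
			}
			fmt.Println("disabled", m)
		}
	}
	return nil
}

func main() {
	_ = disableCNIConfigs("/etc/cni/net.d", []string{"*loopback.conf*", "*bridge*", "*podman*"})
}
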
	I0717 21:58:49.676674  226587 start.go:466] detecting cgroup driver to use...
	I0717 21:58:49.676708  226587 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 21:58:49.676782  226587 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 21:58:49.689912  226587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 21:58:49.699522  226587 docker.go:196] disabling cri-docker service (if available) ...
	I0717 21:58:49.699573  226587 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 21:58:49.711929  226587 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 21:58:49.723995  226587 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 21:58:49.796470  226587 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 21:58:49.876878  226587 docker.go:212] disabling docker service ...
	I0717 21:58:49.876947  226587 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 21:58:49.893905  226587 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 21:58:49.904023  226587 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 21:58:49.980085  226587 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 21:58:50.061265  226587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 21:58:50.071932  226587 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 21:58:50.086047  226587 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 21:58:50.086106  226587 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 21:58:50.094702  226587 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 21:58:50.094758  226587 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 21:58:50.103492  226587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 21:58:50.112026  226587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 21:58:50.120932  226587 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 21:58:50.128985  226587 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 21:58:50.136312  226587 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 21:58:50.144308  226587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 21:58:50.216638  226587 ssh_runner.go:195] Run: sudo systemctl restart crio
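
The sed edits above pin the pause image and force the cgroupfs cgroup manager (with conmon in the "pod" cgroup) before CRI-O is restarted, so 02-crio.conf ends up with pause_image = "registry.k8s.io/pause:3.9", cgroup_manager = "cgroupfs" and conmon_cgroup = "pod". A rough Go equivalent of those in-place line rewrites (illustrative only; minikube shells out to sed as logged):

package main

import (
	"os"
	"regexp"
)

// rewriteLine replaces every line matching pattern with repl, the same
// whole-line edit the `sed -i 's|^.*X = .*$|...|'` commands perform.
func rewriteLine(path, pattern, repl string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile("(?m)^.*" + pattern + ".*$")
	return os.WriteFile(path, re.ReplaceAll(data, []byte(repl)), 0o644)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	// The log additionally deletes and re-adds conmon_cgroup = "pod".
	rewriteLine(conf, `pause_image = `, `pause_image = "registry.k8s.io/pause:3.9"`)
	rewriteLine(conf, `cgroup_manager = `, `cgroup_manager = "cgroupfs"`)
}
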
	I0717 21:58:50.320394  226587 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 21:58:50.320478  226587 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 21:58:50.323838  226587 start.go:534] Will wait 60s for crictl version
	I0717 21:58:50.323895  226587 ssh_runner.go:195] Run: which crictl
	I0717 21:58:50.326911  226587 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 21:58:50.359363  226587 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0717 21:58:50.359480  226587 ssh_runner.go:195] Run: crio --version
	I0717 21:58:50.393460  226587 ssh_runner.go:195] Run: crio --version
	I0717 21:58:50.429692  226587 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.6 ...
	I0717 21:58:50.431219  226587 cli_runner.go:164] Run: docker network inspect addons-759450 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 21:58:50.447490  226587 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0717 21:58:50.451073  226587 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
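
The hosts-file one-liner above is an idempotent update: grep -v strips any stale host.minikube.internal line, the fresh entry is appended, and the result is copied back with cp rather than mv, since /etc/hosts is bind-mounted inside the container and its inode cannot be swapped out. The same filter-then-append update as a small Go sketch (ensureHostsEntry is a hypothetical helper):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes any line ending in "\thost" and appends a
// fresh "ip\thost" entry, then truncates and rewrites the file in
// place, which keeps the inode the same way `cp` does in the log.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	_ = ensureHostsEntry("/etc/hosts", "192.168.49.1", "host.minikube.internal")
}
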
	I0717 21:58:50.461437  226587 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 21:58:50.461519  226587 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 21:58:50.513176  226587 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 21:58:50.513198  226587 crio.go:415] Images already preloaded, skipping extraction
	I0717 21:58:50.513242  226587 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 21:58:50.544528  226587 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 21:58:50.544550  226587 cache_images.go:84] Images are preloaded, skipping loading
	I0717 21:58:50.544605  226587 ssh_runner.go:195] Run: crio config
	I0717 21:58:50.586276  226587 cni.go:84] Creating CNI manager for ""
	I0717 21:58:50.586381  226587 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 21:58:50.586399  226587 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 21:58:50.586425  226587 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-759450 NodeName:addons-759450 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 21:58:50.586595  226587 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-759450"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 21:58:50.586684  226587 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-759450 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:addons-759450 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 21:58:50.586748  226587 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 21:58:50.594911  226587 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 21:58:50.594971  226587 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 21:58:50.602679  226587 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I0717 21:58:50.618238  226587 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 21:58:50.634569  226587 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0717 21:58:50.650564  226587 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0717 21:58:50.653770  226587 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 21:58:50.663506  226587 certs.go:56] Setting up /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450 for IP: 192.168.49.2
	I0717 21:58:50.663537  226587 certs.go:190] acquiring lock for shared ca certs: {Name:mk5feafb57b96958f78245f8503644226fe57af0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:58:50.663668  226587 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/16899-218877/.minikube/ca.key
	I0717 21:58:50.899733  226587 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-218877/.minikube/ca.crt ...
	I0717 21:58:50.899763  226587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-218877/.minikube/ca.crt: {Name:mkbb9da8a89070162195bbb2c75b1790f952585a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:58:50.899946  226587 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-218877/.minikube/ca.key ...
	I0717 21:58:50.899963  226587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-218877/.minikube/ca.key: {Name:mk0e9576bb44c70c2100321d67a2894d29c5f632 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:58:50.900034  226587 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/16899-218877/.minikube/proxy-client-ca.key
	I0717 21:58:51.001691  226587 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-218877/.minikube/proxy-client-ca.crt ...
	I0717 21:58:51.001722  226587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-218877/.minikube/proxy-client-ca.crt: {Name:mk493fec11c53f5235f608b45b18bff4ece62549 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:58:51.001886  226587 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-218877/.minikube/proxy-client-ca.key ...
	I0717 21:58:51.001897  226587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-218877/.minikube/proxy-client-ca.key: {Name:mkb844f0ff4771ad96c10de8a8f0ef3b4de785a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:58:51.001996  226587 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/client.key
	I0717 21:58:51.002020  226587 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/client.crt with IP's: []
	I0717 21:58:51.138104  226587 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/client.crt ...
	I0717 21:58:51.138137  226587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/client.crt: {Name:mk593e2ad0869a9953103067afdf6c1bd629d566 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:58:51.138305  226587 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/client.key ...
	I0717 21:58:51.138317  226587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/client.key: {Name:mka38483f32f0a79600eb373813486d6956ed964 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:58:51.138385  226587 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/apiserver.key.dd3b5fb2
	I0717 21:58:51.138404  226587 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0717 21:58:51.248949  226587 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/apiserver.crt.dd3b5fb2 ...
	I0717 21:58:51.248983  226587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/apiserver.crt.dd3b5fb2: {Name:mkeaaeda6102306a9953e54cb9fe00a4e2b53d6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:58:51.249147  226587 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/apiserver.key.dd3b5fb2 ...
	I0717 21:58:51.249160  226587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/apiserver.key.dd3b5fb2: {Name:mk4967043e607a617482241f101f48e6317d8745 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:58:51.249233  226587 certs.go:337] copying /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/apiserver.crt
	I0717 21:58:51.249297  226587 certs.go:341] copying /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/apiserver.key
	I0717 21:58:51.249342  226587 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/proxy-client.key
	I0717 21:58:51.249358  226587 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/proxy-client.crt with IP's: []
	I0717 21:58:51.397890  226587 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/proxy-client.crt ...
	I0717 21:58:51.397920  226587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/proxy-client.crt: {Name:mkc19ecef5844ad0baabd3fa75d340eb90036b35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:58:51.398082  226587 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/proxy-client.key ...
	I0717 21:58:51.398095  226587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/proxy-client.key: {Name:mk1f69cf4753c6daa96c4e82b360522409446977 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:58:51.398256  226587 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 21:58:51.398297  226587 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca.pem (1078 bytes)
	I0717 21:58:51.398323  226587 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/cert.pem (1123 bytes)
	I0717 21:58:51.398350  226587 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/key.pem (1679 bytes)
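
The certs/crypto steps above build minikube's PKI from scratch: a self-signed minikubeCA and proxyClientCA, then client, apiserver (with IP and DNS SANs) and proxy-client certs signed by them. A condensed sketch of that pattern using Go's crypto/x509, with the SAN values taken from the log (a minimal illustration, not minikube's crypto.go):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Self-signed CA, the equivalent of minikubeCA above.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
	must(err)
	caCert, err := x509.ParseCertificate(caDER)
	must(err)

	// CA-signed server cert with IP SANs from the apiserver cert above
	// and DNS SANs from the earlier server.pem SAN list.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)
	srv := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "addons-759450"},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
	must(err)
	must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}
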
	I0717 21:58:51.399296  226587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 21:58:51.422537  226587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 21:58:51.444012  226587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 21:58:51.466244  226587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 21:58:51.487954  226587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 21:58:51.509326  226587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 21:58:51.530599  226587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 21:58:51.552616  226587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 21:58:51.574124  226587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 21:58:51.595724  226587 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 21:58:51.611201  226587 ssh_runner.go:195] Run: openssl version
	I0717 21:58:51.616136  226587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 21:58:51.624359  226587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 21:58:51.627837  226587 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 21:58 /usr/share/ca-certificates/minikubeCA.pem
	I0717 21:58:51.627898  226587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 21:58:51.634074  226587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 21:58:51.642477  226587 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 21:58:51.645433  226587 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 21:58:51.645481  226587 kubeadm.go:404] StartCluster: {Name:addons-759450 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-759450 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 21:58:51.645562  226587 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 21:58:51.645597  226587 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 21:58:51.679862  226587 cri.go:89] found id: ""
	I0717 21:58:51.679936  226587 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 21:58:51.688179  226587 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 21:58:51.696251  226587 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0717 21:58:51.696310  226587 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 21:58:51.704157  226587 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 21:58:51.704207  226587 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0717 21:58:51.747501  226587 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0717 21:58:51.747973  226587 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 21:58:51.782331  226587 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0717 21:58:51.782402  226587 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1037-gcp
	I0717 21:58:51.782448  226587 kubeadm.go:322] OS: Linux
	I0717 21:58:51.782532  226587 kubeadm.go:322] CGROUPS_CPU: enabled
	I0717 21:58:51.782595  226587 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0717 21:58:51.782688  226587 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0717 21:58:51.782740  226587 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0717 21:58:51.782785  226587 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0717 21:58:51.782834  226587 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0717 21:58:51.782876  226587 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0717 21:58:51.782926  226587 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0717 21:58:51.782983  226587 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0717 21:58:51.844090  226587 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 21:58:51.844284  226587 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 21:58:51.844389  226587 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 21:58:52.036920  226587 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 21:58:52.040087  226587 out.go:204]   - Generating certificates and keys ...
	I0717 21:58:52.040208  226587 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 21:58:52.040309  226587 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 21:58:52.216230  226587 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 21:58:52.287857  226587 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0717 21:58:52.488579  226587 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0717 21:58:52.672238  226587 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0717 21:58:52.965520  226587 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0717 21:58:52.965659  226587 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-759450 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0717 21:58:53.087140  226587 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0717 21:58:53.087382  226587 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-759450 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0717 21:58:53.163801  226587 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 21:58:53.340684  226587 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 21:58:53.614678  226587 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0717 21:58:53.614769  226587 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 21:58:53.862413  226587 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 21:58:54.036281  226587 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 21:58:54.375068  226587 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 21:58:54.513356  226587 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 21:58:54.521102  226587 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 21:58:54.521861  226587 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 21:58:54.521929  226587 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0717 21:58:54.593267  226587 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 21:58:54.595561  226587 out.go:204]   - Booting up control plane ...
	I0717 21:58:54.595714  226587 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 21:58:54.595885  226587 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 21:58:54.596827  226587 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 21:58:54.597485  226587 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 21:58:54.599350  226587 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 21:58:59.601337  226587 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.001866 seconds
	I0717 21:58:59.601527  226587 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 21:58:59.612944  226587 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 21:59:00.134154  226587 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 21:59:00.134363  226587 kubeadm.go:322] [mark-control-plane] Marking the node addons-759450 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 21:59:00.644641  226587 kubeadm.go:322] [bootstrap-token] Using token: xru150.4kq5kisstqkcg3ef
	I0717 21:59:00.646365  226587 out.go:204]   - Configuring RBAC rules ...
	I0717 21:59:00.646483  226587 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 21:59:00.650204  226587 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 21:59:00.658104  226587 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 21:59:00.660881  226587 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 21:59:00.663664  226587 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 21:59:00.666259  226587 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 21:59:00.677094  226587 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 21:59:00.899371  226587 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0717 21:59:01.066616  226587 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0717 21:59:01.066640  226587 kubeadm.go:322] 
	I0717 21:59:01.066711  226587 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0717 21:59:01.066743  226587 kubeadm.go:322] 
	I0717 21:59:01.066840  226587 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0717 21:59:01.066858  226587 kubeadm.go:322] 
	I0717 21:59:01.066893  226587 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0717 21:59:01.066974  226587 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 21:59:01.067043  226587 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 21:59:01.067054  226587 kubeadm.go:322] 
	I0717 21:59:01.067125  226587 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0717 21:59:01.067135  226587 kubeadm.go:322] 
	I0717 21:59:01.067196  226587 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 21:59:01.067209  226587 kubeadm.go:322] 
	I0717 21:59:01.067273  226587 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0717 21:59:01.067381  226587 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 21:59:01.067486  226587 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 21:59:01.067498  226587 kubeadm.go:322] 
	I0717 21:59:01.067611  226587 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 21:59:01.067714  226587 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0717 21:59:01.067726  226587 kubeadm.go:322] 
	I0717 21:59:01.067834  226587 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token xru150.4kq5kisstqkcg3ef \
	I0717 21:59:01.067961  226587 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:bfc53725e6665ea0346f55c73390f7faa9cc8aa313e76f38236964b5079a2a27 \
	I0717 21:59:01.067991  226587 kubeadm.go:322] 	--control-plane 
	I0717 21:59:01.067997  226587 kubeadm.go:322] 
	I0717 21:59:01.068108  226587 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0717 21:59:01.068114  226587 kubeadm.go:322] 
	I0717 21:59:01.068219  226587 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token xru150.4kq5kisstqkcg3ef \
	I0717 21:59:01.068349  226587 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:bfc53725e6665ea0346f55c73390f7faa9cc8aa313e76f38236964b5079a2a27 
	I0717 21:59:01.070579  226587 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1037-gcp\n", err: exit status 1
	I0717 21:59:01.070729  226587 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
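
The --discovery-token-ca-cert-hash in the join commands above is how joining nodes pin the cluster CA: kubeadm hashes the DER-encoded Subject Public Key Info of the CA certificate with SHA-256. A short Go program that recomputes it from the ca.crt kubeadm installed (path taken from the log):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// The cluster CA that kubeadm placed on the node.
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}
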
	I0717 21:59:01.070767  226587 cni.go:84] Creating CNI manager for ""
	I0717 21:59:01.070781  226587 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 21:59:01.072546  226587 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0717 21:59:01.073900  226587 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 21:59:01.078593  226587 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0717 21:59:01.078617  226587 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 21:59:01.096808  226587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0717 21:59:01.796915  226587 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 21:59:01.797017  226587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.31.0 minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8 minikube.k8s.io/name=addons-759450 minikube.k8s.io/updated_at=2023_07_17T21_59_01_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:59:01.797056  226587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:59:01.804003  226587 ops.go:34] apiserver oom_adj: -16
	I0717 21:59:01.883026  226587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:59:02.449828  226587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:59:02.949439  226587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:59:03.449605  226587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:59:03.949663  226587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:59:04.449458  226587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:59:04.950120  226587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:59:05.449670  226587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:59:05.949352  226587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:59:06.449484  226587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:59:06.949822  226587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:59:07.449771  226587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:59:07.949542  226587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:59:08.449946  226587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:59:08.949940  226587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:59:09.449322  226587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:59:09.949273  226587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:59:10.449976  226587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:59:10.950245  226587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:59:11.449177  226587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:59:11.949901  226587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:59:12.450011  226587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:59:12.949986  226587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:59:13.449823  226587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:59:13.949478  226587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 21:59:14.083833  226587 kubeadm.go:1081] duration metric: took 12.286890084s to wait for elevateKubeSystemPrivileges.
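
The burst of kubectl get sa default calls above, spaced roughly 500ms apart, is minikube polling until the default service account exists, the signal that the controller manager is up and the kube-system RBAC grant can proceed (the "elevateKubeSystemPrivileges" metric). The same wait pattern as a small Go sketch (illustrative; the real loop lives in minikube's kubeadm.go):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA retries `kubectl get sa default` every 500ms until
// it succeeds or the context expires, mirroring the loop in the log.
func waitForDefaultSA(ctx context.Context, kubeconfig string) error {
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		cmd := exec.CommandContext(ctx, "kubectl", "get", "sa", "default",
			"--kubeconfig", kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // service account exists
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-tick.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	fmt.Println(waitForDefaultSA(ctx, "/var/lib/minikube/kubeconfig"))
}
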
	I0717 21:59:14.083869  226587 kubeadm.go:406] StartCluster complete in 22.438392799s
	I0717 21:59:14.083893  226587 settings.go:142] acquiring lock: {Name:mkd04bbc59ef11ead8108410e404fcf464b56f66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:59:14.084054  226587 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16899-218877/kubeconfig
	I0717 21:59:14.084681  226587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-218877/kubeconfig: {Name:mkbb3c2ee0d4a9dc4a5c436ca7b4ee88dbc131b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 21:59:14.085535  226587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 21:59:14.085639  226587 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
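
From this point the timestamps interleave because each addon selected in the toEnable map above is configured on its own goroutine; the repeated Setting/Checking/Run lines that follow are those goroutines racing. A bare-bones sketch of that concurrent fan-out (names here are hypothetical, not minikube's addons package):

package main

import (
	"fmt"
	"sync"
)

// enableAddon stands in for the per-addon work seen below: container
// inspects, scp of manifests, and kubectl apply.
func enableAddon(profile, name string) {
	fmt.Printf("Setting addon %s=true in %q\n", name, profile)
}

func main() {
	toEnable := map[string]bool{
		"ingress": true, "ingress-dns": true, "registry": true,
		"metrics-server": true, "csi-hostpath-driver": true,
	}
	var wg sync.WaitGroup
	for name, on := range toEnable {
		if !on {
			continue
		}
		wg.Add(1)
		go func(name string) {
			defer wg.Done()
			enableAddon("addons-759450", name)
		}(name)
	}
	wg.Wait()
}
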
	I0717 21:59:14.085769  226587 addons.go:69] Setting volumesnapshots=true in profile "addons-759450"
	I0717 21:59:14.085776  226587 addons.go:69] Setting default-storageclass=true in profile "addons-759450"
	I0717 21:59:14.085789  226587 addons.go:69] Setting ingress-dns=true in profile "addons-759450"
	I0717 21:59:14.085802  226587 addons.go:69] Setting inspektor-gadget=true in profile "addons-759450"
	I0717 21:59:14.085803  226587 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-759450"
	I0717 21:59:14.085813  226587 addons.go:231] Setting addon inspektor-gadget=true in "addons-759450"
	I0717 21:59:14.085833  226587 addons.go:69] Setting helm-tiller=true in profile "addons-759450"
	I0717 21:59:14.085844  226587 addons.go:231] Setting addon helm-tiller=true in "addons-759450"
	I0717 21:59:14.085867  226587 host.go:66] Checking if "addons-759450" exists ...
	I0717 21:59:14.085887  226587 host.go:66] Checking if "addons-759450" exists ...
	I0717 21:59:14.086021  226587 addons.go:69] Setting ingress=true in profile "addons-759450"
	I0717 21:59:14.086065  226587 addons.go:231] Setting addon ingress=true in "addons-759450"
	I0717 21:59:14.086111  226587 addons.go:69] Setting cloud-spanner=true in profile "addons-759450"
	I0717 21:59:14.086150  226587 host.go:66] Checking if "addons-759450" exists ...
	I0717 21:59:14.086172  226587 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-759450"
	I0717 21:59:14.086196  226587 addons.go:231] Setting addon cloud-spanner=true in "addons-759450"
	I0717 21:59:14.086234  226587 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-759450"
	I0717 21:59:14.086280  226587 host.go:66] Checking if "addons-759450" exists ...
	I0717 21:59:14.086300  226587 addons.go:69] Setting storage-provisioner=true in profile "addons-759450"
	I0717 21:59:14.086312  226587 addons.go:231] Setting addon storage-provisioner=true in "addons-759450"
	I0717 21:59:14.086340  226587 host.go:66] Checking if "addons-759450" exists ...
	I0717 21:59:14.086386  226587 cli_runner.go:164] Run: docker container inspect addons-759450 --format={{.State.Status}}
	I0717 21:59:14.086468  226587 cli_runner.go:164] Run: docker container inspect addons-759450 --format={{.State.Status}}
	I0717 21:59:14.086655  226587 cli_runner.go:164] Run: docker container inspect addons-759450 --format={{.State.Status}}
	I0717 21:59:14.086280  226587 host.go:66] Checking if "addons-759450" exists ...
	I0717 21:59:14.086714  226587 cli_runner.go:164] Run: docker container inspect addons-759450 --format={{.State.Status}}
	I0717 21:59:14.086789  226587 cli_runner.go:164] Run: docker container inspect addons-759450 --format={{.State.Status}}
	I0717 21:59:14.085792  226587 addons.go:231] Setting addon volumesnapshots=true in "addons-759450"
	I0717 21:59:14.086832  226587 host.go:66] Checking if "addons-759450" exists ...
	I0717 21:59:14.087055  226587 cli_runner.go:164] Run: docker container inspect addons-759450 --format={{.State.Status}}
	I0717 21:59:14.087104  226587 cli_runner.go:164] Run: docker container inspect addons-759450 --format={{.State.Status}}
	I0717 21:59:14.085770  226587 config.go:182] Loaded profile config "addons-759450": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 21:59:14.086158  226587 addons.go:69] Setting metrics-server=true in profile "addons-759450"
	I0717 21:59:14.087175  226587 addons.go:231] Setting addon metrics-server=true in "addons-759450"
	I0717 21:59:14.085813  226587 addons.go:231] Setting addon ingress-dns=true in "addons-759450"
	I0717 21:59:14.087209  226587 host.go:66] Checking if "addons-759450" exists ...
	I0717 21:59:14.087211  226587 cli_runner.go:164] Run: docker container inspect addons-759450 --format={{.State.Status}}
	I0717 21:59:14.087239  226587 host.go:66] Checking if "addons-759450" exists ...
	I0717 21:59:14.085823  226587 addons.go:69] Setting gcp-auth=true in profile "addons-759450"
	I0717 21:59:14.087315  226587 mustload.go:65] Loading cluster: addons-759450
	I0717 21:59:14.087507  226587 config.go:182] Loaded profile config "addons-759450": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 21:59:14.087611  226587 cli_runner.go:164] Run: docker container inspect addons-759450 --format={{.State.Status}}
	I0717 21:59:14.087730  226587 cli_runner.go:164] Run: docker container inspect addons-759450 --format={{.State.Status}}
	I0717 21:59:14.086131  226587 addons.go:69] Setting registry=true in profile "addons-759450"
	I0717 21:59:14.087784  226587 addons.go:231] Setting addon registry=true in "addons-759450"
	I0717 21:59:14.087828  226587 host.go:66] Checking if "addons-759450" exists ...
	I0717 21:59:14.087874  226587 cli_runner.go:164] Run: docker container inspect addons-759450 --format={{.State.Status}}
	I0717 21:59:14.112792  226587 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.18.1
	I0717 21:59:14.112025  226587 cli_runner.go:164] Run: docker container inspect addons-759450 --format={{.State.Status}}
	I0717 21:59:14.114408  226587 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0717 21:59:14.114443  226587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0717 21:59:14.114501  226587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-759450
	I0717 21:59:14.114603  226587 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0717 21:59:14.116187  226587 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.8.1
	I0717 21:59:14.118086  226587 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0717 21:59:14.124708  226587 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0717 21:59:14.124737  226587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I0717 21:59:14.124809  226587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-759450
	I0717 21:59:14.129053  226587 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0717 21:59:14.130635  226587 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0717 21:59:14.136646  226587 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0717 21:59:14.136663  226587 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 21:59:14.136647  226587 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0717 21:59:14.138122  226587 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0717 21:59:14.139556  226587 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 21:59:14.140670  226587 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.7
	I0717 21:59:14.140686  226587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0717 21:59:14.141905  226587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-759450
	I0717 21:59:14.142396  226587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 21:59:14.142464  226587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-759450
	I0717 21:59:14.144094  226587 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0717 21:59:14.145452  226587 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I0717 21:59:14.145463  226587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1003 bytes)
	I0717 21:59:14.145506  226587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-759450
	I0717 21:59:14.145352  226587 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0717 21:59:14.147203  226587 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0717 21:59:14.148551  226587 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0717 21:59:14.149846  226587 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0717 21:59:14.152064  226587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/addons-759450/id_rsa Username:docker}
	I0717 21:59:14.152094  226587 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0717 21:59:14.152112  226587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0717 21:59:14.152172  226587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-759450
	I0717 21:59:14.156573  226587 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0717 21:59:14.158073  226587 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0717 21:59:14.158100  226587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0717 21:59:14.158160  226587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-759450
	I0717 21:59:14.169487  226587 addons.go:231] Setting addon default-storageclass=true in "addons-759450"
	I0717 21:59:14.169540  226587 host.go:66] Checking if "addons-759450" exists ...
	I0717 21:59:14.169556  226587 host.go:66] Checking if "addons-759450" exists ...
	I0717 21:59:14.169903  226587 cli_runner.go:164] Run: docker container inspect addons-759450 --format={{.State.Status}}
	I0717 21:59:14.173165  226587 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.3
	I0717 21:59:14.174646  226587 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 21:59:14.174666  226587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 21:59:14.174810  226587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-759450
	I0717 21:59:14.176202  226587 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0717 21:59:14.177580  226587 out.go:177]   - Using image docker.io/registry:2.8.1
	I0717 21:59:14.178920  226587 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I0717 21:59:14.178936  226587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0717 21:59:14.178992  226587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-759450
	I0717 21:59:14.181208  226587 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0717 21:59:14.182677  226587 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0717 21:59:14.182693  226587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0717 21:59:14.182748  226587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-759450
	I0717 21:59:14.182178  226587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/addons-759450/id_rsa Username:docker}
	I0717 21:59:14.194193  226587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/addons-759450/id_rsa Username:docker}
	I0717 21:59:14.194773  226587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/addons-759450/id_rsa Username:docker}
	I0717 21:59:14.207536  226587 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 21:59:14.207563  226587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 21:59:14.207632  226587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-759450
	I0717 21:59:14.219878  226587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/addons-759450/id_rsa Username:docker}
	I0717 21:59:14.221168  226587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/addons-759450/id_rsa Username:docker}
	I0717 21:59:14.222594  226587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/addons-759450/id_rsa Username:docker}
	I0717 21:59:14.226403  226587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/addons-759450/id_rsa Username:docker}
	I0717 21:59:14.228530  226587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/addons-759450/id_rsa Username:docker}
	I0717 21:59:14.229540  226587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/addons-759450/id_rsa Username:docker}
	I0717 21:59:14.236956  226587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/addons-759450/id_rsa Username:docker}
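Each cli_runner invocation above resolves the container's forwarded SSH port by indexing the Docker port map with a Go template; every sshutil client that follows connects to the result. Run by hand against the same container (a sketch, assuming the addons-759450 container is still up), the lookup is:

	docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-759450

which here prints '32772', the port carried in every new-ssh-client line above.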
	I0717 21:59:14.371872  226587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 21:59:14.376917  226587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0717 21:59:14.563573  226587 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0717 21:59:14.563661  226587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0717 21:59:14.571392  226587 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 21:59:14.571484  226587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0717 21:59:14.571671  226587 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0717 21:59:14.571704  226587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0717 21:59:14.582572  226587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0717 21:59:14.680851  226587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 21:59:14.763853  226587 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-759450" context rescaled to 1 replicas
	I0717 21:59:14.763902  226587 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 21:59:14.766491  226587 out.go:177] * Verifying Kubernetes components...
	I0717 21:59:14.768258  226587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
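The kapi.go:248 rescale just above pins coredns to a single replica before component verification starts; the equivalent hand-run command, as a sketch assuming the in-VM kubectl binary and kubeconfig paths used throughout this log, is:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl -n kube-system scale deployment coredns --replicas=1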
	I0717 21:59:14.768794  226587 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 21:59:14.768851  226587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 21:59:14.769144  226587 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I0717 21:59:14.769173  226587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0717 21:59:14.772884  226587 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I0717 21:59:14.772903  226587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0717 21:59:14.776651  226587 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0717 21:59:14.776675  226587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0717 21:59:14.783261  226587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 21:59:14.876617  226587 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0717 21:59:14.876652  226587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0717 21:59:14.961929  226587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0717 21:59:14.962105  226587 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0717 21:59:14.962125  226587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0717 21:59:14.968230  226587 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0717 21:59:14.968311  226587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0717 21:59:14.974309  226587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0717 21:59:15.062762  226587 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0717 21:59:15.062853  226587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0717 21:59:15.067134  226587 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 21:59:15.067214  226587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 21:59:15.175130  226587 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0717 21:59:15.175168  226587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0717 21:59:15.269880  226587 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0717 21:59:15.269972  226587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0717 21:59:15.280132  226587 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0717 21:59:15.280211  226587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0717 21:59:15.285387  226587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0717 21:59:15.360142  226587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 21:59:15.382390  226587 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0717 21:59:15.382467  226587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0717 21:59:15.576173  226587 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0717 21:59:15.576247  226587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0717 21:59:15.766746  226587 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0717 21:59:15.766826  226587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0717 21:59:15.867143  226587 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0717 21:59:15.867239  226587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0717 21:59:16.062791  226587 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0717 21:59:16.062883  226587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0717 21:59:16.083661  226587 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 21:59:16.083687  226587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0717 21:59:16.161315  226587 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I0717 21:59:16.161343  226587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0717 21:59:16.382488  226587 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.01052992s)
	I0717 21:59:16.382534  226587 start.go:901] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
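The 2.01s ssh_runner pipeline completed above edits the coredns ConfigMap in place: sed inserts a hosts block before the forward plugin and a log directive before errors, then kubectl replace pushes the result back. Reconstructed from the sed expression itself (surrounding Corefile directives elided), the injected fragment is:

	log
	errors
	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}
	forward . /etc/resolv.conf

With fallthrough set, only host.minikube.internal is answered from the hosts block; every other name still reaches the forward plugin.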
	I0717 21:59:16.463575  226587 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0717 21:59:16.463673  226587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0717 21:59:16.479732  226587 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0717 21:59:16.479815  226587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0717 21:59:16.670940  226587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 21:59:16.761284  226587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0717 21:59:16.770661  226587 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0717 21:59:16.770741  226587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0717 21:59:16.960820  226587 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0717 21:59:16.960910  226587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0717 21:59:17.362244  226587 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0717 21:59:17.362328  226587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0717 21:59:17.475926  226587 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0717 21:59:17.476010  226587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0717 21:59:17.679897  226587 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0717 21:59:17.679976  226587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0717 21:59:17.970098  226587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0717 21:59:20.365986  226587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.989022099s)
	I0717 21:59:20.366086  226587 addons.go:467] Verifying addon ingress=true in "addons-759450"
	I0717 21:59:20.366166  226587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.783561533s)
	I0717 21:59:20.367739  226587 out.go:177] * Verifying ingress addon...
	I0717 21:59:20.366260  226587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.685376467s)
	I0717 21:59:20.366281  226587 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (5.598001155s)
	I0717 21:59:20.366307  226587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.582977435s)
	I0717 21:59:20.366356  226587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.404401851s)
	I0717 21:59:20.366393  226587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (5.392008973s)
	I0717 21:59:20.366460  226587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.081029212s)
	I0717 21:59:20.366551  226587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.0063706s)
	I0717 21:59:20.366676  226587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.695636966s)
	I0717 21:59:20.366763  226587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.605392112s)
	I0717 21:59:20.369204  226587 addons.go:467] Verifying addon registry=true in "addons-759450"
	I0717 21:59:20.369656  226587 addons.go:467] Verifying addon metrics-server=true in "addons-759450"
	W0717 21:59:20.369688  226587 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0717 21:59:20.371049  226587 retry.go:31] will retry after 213.738427ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
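This is a CRD-establishment race: the VolumeSnapshotClass object is applied in the same kubectl batch as the CRD that defines it, and API discovery has not refreshed yet, so the resource-mapping lookup fails. minikube's answer is the retry.go backoff recorded above, re-running the batch with apply --force (which succeeds about two seconds later, at 21:59:22.841). A sketch of sidestepping the retry entirely, assuming the same manifest paths, is to apply the CRDs first and wait for them to reach the Established condition before applying the class:

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for condition=established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml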
	I0717 21:59:20.370115  226587 node_ready.go:35] waiting up to 6m0s for node "addons-759450" to be "Ready" ...
	I0717 21:59:20.370200  226587 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0717 21:59:20.371392  226587 out.go:177] * Verifying registry addon...
	I0717 21:59:20.373660  226587 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0717 21:59:20.380638  226587 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0717 21:59:20.380667  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:20.381093  226587 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0717 21:59:20.381115  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
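The kapi.go:96 lines here and throughout the rest of this log are minikube's poll loop: each addon verifier re-lists the pods behind its label selector until every match reports Ready, logging the current phase on each pass. The same two selectors can be inspected by hand (assuming the context and namespaces named above):

	kubectl --context addons-759450 -n ingress-nginx get pod -l app.kubernetes.io/name=ingress-nginx
	kubectl --context addons-759450 -n kube-system get pod -l kubernetes.io/minikube-addons=registry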
	I0717 21:59:20.585316  226587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 21:59:20.879603  226587 node_ready.go:49] node "addons-759450" has status "Ready":"True"
	I0717 21:59:20.879632  226587 node_ready.go:38] duration metric: took 508.560452ms waiting for node "addons-759450" to be "Ready" ...
	I0717 21:59:20.879644  226587 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 21:59:20.888078  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:20.888133  226587 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0717 21:59:20.888148  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:20.889315  226587 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-rqkkk" in "kube-system" namespace to be "Ready" ...
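node_ready and pod_ready drive the same style of poll at the cluster level: first the node condition, then each system-critical pod in turn. Both checks have direct kubectl wait equivalents (a sketch, assuming the addons-759450 context):

	kubectl --context addons-759450 wait --for=condition=Ready node/addons-759450 --timeout=6m
	kubectl --context addons-759450 -n kube-system wait --for=condition=Ready pod/coredns-5d78c9869d-rqkkk --timeout=6m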
	I0717 21:59:20.979919  226587 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0717 21:59:20.980049  226587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-759450
	I0717 21:59:20.998465  226587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/addons-759450/id_rsa Username:docker}
	I0717 21:59:21.463613  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:21.463933  226587 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0717 21:59:21.465093  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:21.488696  226587 addons.go:231] Setting addon gcp-auth=true in "addons-759450"
	I0717 21:59:21.488756  226587 host.go:66] Checking if "addons-759450" exists ...
	I0717 21:59:21.489304  226587 cli_runner.go:164] Run: docker container inspect addons-759450 --format={{.State.Status}}
	I0717 21:59:21.506891  226587 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0717 21:59:21.506951  226587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-759450
	I0717 21:59:21.523353  226587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/addons-759450/id_rsa Username:docker}
	I0717 21:59:21.884597  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:21.884729  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:22.193521  226587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.223299908s)
	I0717 21:59:22.193568  226587 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-759450"
	I0717 21:59:22.195382  226587 out.go:177] * Verifying csi-hostpath-driver addon...
	I0717 21:59:22.197848  226587 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0717 21:59:22.267600  226587 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0717 21:59:22.267629  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:22.389312  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:22.389972  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:22.400746  226587 pod_ready.go:92] pod "coredns-5d78c9869d-rqkkk" in "kube-system" namespace has status "Ready":"True"
	I0717 21:59:22.400768  226587 pod_ready.go:81] duration metric: took 1.511433408s waiting for pod "coredns-5d78c9869d-rqkkk" in "kube-system" namespace to be "Ready" ...
	I0717 21:59:22.400778  226587 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-759450" in "kube-system" namespace to be "Ready" ...
	I0717 21:59:22.405654  226587 pod_ready.go:92] pod "etcd-addons-759450" in "kube-system" namespace has status "Ready":"True"
	I0717 21:59:22.405674  226587 pod_ready.go:81] duration metric: took 4.890958ms waiting for pod "etcd-addons-759450" in "kube-system" namespace to be "Ready" ...
	I0717 21:59:22.405686  226587 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-759450" in "kube-system" namespace to be "Ready" ...
	I0717 21:59:22.410128  226587 pod_ready.go:92] pod "kube-apiserver-addons-759450" in "kube-system" namespace has status "Ready":"True"
	I0717 21:59:22.410148  226587 pod_ready.go:81] duration metric: took 4.456108ms waiting for pod "kube-apiserver-addons-759450" in "kube-system" namespace to be "Ready" ...
	I0717 21:59:22.410161  226587 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-759450" in "kube-system" namespace to be "Ready" ...
	I0717 21:59:22.465676  226587 pod_ready.go:92] pod "kube-controller-manager-addons-759450" in "kube-system" namespace has status "Ready":"True"
	I0717 21:59:22.465704  226587 pod_ready.go:81] duration metric: took 55.533833ms waiting for pod "kube-controller-manager-addons-759450" in "kube-system" namespace to be "Ready" ...
	I0717 21:59:22.465721  226587 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bxr9d" in "kube-system" namespace to be "Ready" ...
	I0717 21:59:22.480171  226587 pod_ready.go:92] pod "kube-proxy-bxr9d" in "kube-system" namespace has status "Ready":"True"
	I0717 21:59:22.480244  226587 pod_ready.go:81] duration metric: took 14.512201ms waiting for pod "kube-proxy-bxr9d" in "kube-system" namespace to be "Ready" ...
	I0717 21:59:22.480270  226587 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-759450" in "kube-system" namespace to be "Ready" ...
	I0717 21:59:22.773330  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:22.841420  226587 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.256056222s)
	I0717 21:59:22.841454  226587 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.334526205s)
	I0717 21:59:22.843553  226587 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0717 21:59:22.845174  226587 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0717 21:59:22.846564  226587 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0717 21:59:22.846582  226587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0717 21:59:22.863689  226587 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0717 21:59:22.863722  226587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0717 21:59:22.879725  226587 pod_ready.go:92] pod "kube-scheduler-addons-759450" in "kube-system" namespace has status "Ready":"True"
	I0717 21:59:22.879750  226587 pod_ready.go:81] duration metric: took 399.463529ms waiting for pod "kube-scheduler-addons-759450" in "kube-system" namespace to be "Ready" ...
	I0717 21:59:22.879760  226587 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-844d8db974-gtlnf" in "kube-system" namespace to be "Ready" ...
	I0717 21:59:22.880664  226587 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0717 21:59:22.880689  226587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0717 21:59:22.885019  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:22.886207  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:22.898744  226587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
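For gcp-auth, the sequence above is: scp the host's application-default credentials (162 bytes) and project id (12 bytes) into the VM, verify the credentials file with cat, then apply the namespace, service, and webhook manifests in one batch. The staged files can be checked from outside the node (a sketch, assuming the profile name used in this run):

	minikube -p addons-759450 ssh "sudo cat /var/lib/minikube/google_application_credentials.json"
	minikube -p addons-759450 ssh "cat /var/lib/minikube/google_cloud_project"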
	I0717 21:59:23.272705  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:23.366049  226587 addons.go:467] Verifying addon gcp-auth=true in "addons-759450"
	I0717 21:59:23.369196  226587 out.go:177] * Verifying gcp-auth addon...
	I0717 21:59:23.371664  226587 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0717 21:59:23.374660  226587 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0717 21:59:23.374676  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:23.385789  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:23.385881  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:23.775259  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:23.879292  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:23.968826  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:23.970228  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:24.275712  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:24.378626  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:24.385114  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:24.386090  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:24.774937  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:24.879303  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:24.887954  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:24.889036  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:25.275956  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:25.367294  226587 pod_ready.go:102] pod "metrics-server-844d8db974-gtlnf" in "kube-system" namespace has status "Ready":"False"
	I0717 21:59:25.379983  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:25.467076  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:25.468696  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:25.774772  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:25.879061  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:25.886446  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:25.886775  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:26.275528  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:26.378692  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:26.386722  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:26.387800  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:26.774717  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:26.879464  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:26.887801  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:26.888167  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:27.274230  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:27.379590  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:27.462789  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:27.463000  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:27.773806  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:27.789632  226587 pod_ready.go:102] pod "metrics-server-844d8db974-gtlnf" in "kube-system" namespace has status "Ready":"False"
	I0717 21:59:27.879146  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:27.884762  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:27.887314  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:28.279928  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:28.378193  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:28.384887  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:28.385468  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:28.774536  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:28.879187  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:28.884891  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:28.885245  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:29.273944  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:29.378693  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:29.385578  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:29.385625  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:29.774014  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:29.878584  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:29.886922  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:29.887214  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:30.273431  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:30.288211  226587 pod_ready.go:102] pod "metrics-server-844d8db974-gtlnf" in "kube-system" namespace has status "Ready":"False"
	I0717 21:59:30.378422  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:30.385323  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:30.385448  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:30.774041  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:30.878868  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:30.885212  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:30.885327  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:31.273677  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:31.378591  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:31.385330  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:31.386004  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:31.773966  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:31.878474  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:31.884882  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:31.884979  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:32.275045  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:32.289267  226587 pod_ready.go:102] pod "metrics-server-844d8db974-gtlnf" in "kube-system" namespace has status "Ready":"False"
	I0717 21:59:32.379192  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:32.385335  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:32.386272  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:32.773366  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:32.878875  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:32.886302  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:32.886528  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:33.273127  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:33.378729  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:33.385426  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:33.385727  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:33.777286  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:33.878244  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:33.885275  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:33.885386  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:34.273119  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:34.378440  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:34.385020  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:34.385038  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:34.772469  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:34.788007  226587 pod_ready.go:102] pod "metrics-server-844d8db974-gtlnf" in "kube-system" namespace has status "Ready":"False"
	I0717 21:59:34.877874  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:34.885512  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:34.885570  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:35.273996  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:35.379708  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:35.385962  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:35.386027  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:35.773189  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:35.879154  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:35.886402  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:35.887524  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:36.273463  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:36.378503  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:36.386118  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:36.386248  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:36.774808  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:36.788395  226587 pod_ready.go:102] pod "metrics-server-844d8db974-gtlnf" in "kube-system" namespace has status "Ready":"False"
	I0717 21:59:36.878481  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:36.885439  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:36.885478  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:37.273300  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:37.378835  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:37.385543  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:37.385740  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:37.773062  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:37.877461  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:37.884806  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:37.885177  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:38.272755  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:38.378045  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:38.385440  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:38.385671  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:38.773484  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:38.877986  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:38.886158  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:38.886384  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:39.273806  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:39.293390  226587 pod_ready.go:102] pod "metrics-server-844d8db974-gtlnf" in "kube-system" namespace has status "Ready":"False"
	I0717 21:59:39.378712  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:39.388219  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:39.389235  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:39.773447  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:39.879707  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:39.888309  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:39.893020  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:40.272868  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:40.378304  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:40.385705  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:40.386539  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:40.774128  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:40.878733  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:40.885324  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:40.885540  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:41.274217  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:41.378281  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:41.385286  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:41.385394  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:41.773705  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:41.790191  226587 pod_ready.go:102] pod "metrics-server-844d8db974-gtlnf" in "kube-system" namespace has status "Ready":"False"
	I0717 21:59:41.878085  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:41.884999  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:41.885977  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:42.274374  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:42.379009  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:42.385963  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:42.387086  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:42.773756  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:42.878692  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:42.885471  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:42.885597  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:43.273354  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:43.379149  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:43.384950  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:43.385057  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:43.772565  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:43.878106  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:43.884342  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:43.885982  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:44.273432  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:44.288153  226587 pod_ready.go:102] pod "metrics-server-844d8db974-gtlnf" in "kube-system" namespace has status "Ready":"False"
	I0717 21:59:44.378207  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:44.384597  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:44.385694  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:44.773351  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:44.878459  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:44.885222  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:44.885297  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:45.273284  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:45.379069  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:45.386442  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:45.386573  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:45.773529  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:45.879765  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:45.886283  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:45.888339  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:46.273327  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:46.378210  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:46.384895  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:46.386199  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:46.773799  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:46.787914  226587 pod_ready.go:102] pod "metrics-server-844d8db974-gtlnf" in "kube-system" namespace has status "Ready":"False"
	I0717 21:59:46.878036  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:46.884464  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:46.885659  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:47.274647  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:47.378857  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:47.386340  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:47.386433  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:47.775018  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:47.879785  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:47.885467  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:47.885693  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:48.273198  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:48.378146  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:48.384500  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:48.385844  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:48.773224  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:48.879034  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:48.885578  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:48.885814  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:49.273865  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:49.288357  226587 pod_ready.go:102] pod "metrics-server-844d8db974-gtlnf" in "kube-system" namespace has status "Ready":"False"
	I0717 21:59:49.378588  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:49.385321  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:49.385430  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:49.773497  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:49.877816  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:49.885260  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:49.885454  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:50.273234  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:50.378579  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:50.386535  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:50.387181  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:50.775481  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:50.788715  226587 pod_ready.go:92] pod "metrics-server-844d8db974-gtlnf" in "kube-system" namespace has status "Ready":"True"
	I0717 21:59:50.788745  226587 pod_ready.go:81] duration metric: took 27.908975837s waiting for pod "metrics-server-844d8db974-gtlnf" in "kube-system" namespace to be "Ready" ...
	I0717 21:59:50.788773  226587 pod_ready.go:38] duration metric: took 29.909096514s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 21:59:50.788795  226587 api_server.go:52] waiting for apiserver process to appear ...
	I0717 21:59:50.788859  226587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 21:59:50.801924  226587 api_server.go:72] duration metric: took 36.037977177s to wait for apiserver process to appear ...
	I0717 21:59:50.801951  226587 api_server.go:88] waiting for apiserver healthz status ...
	I0717 21:59:50.801972  226587 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0717 21:59:50.864133  226587 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0717 21:59:50.865376  226587 api_server.go:141] control plane version: v1.27.3
	I0717 21:59:50.865404  226587 api_server.go:131] duration metric: took 63.445576ms to wait for apiserver health ...
	I0717 21:59:50.865417  226587 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 21:59:50.878601  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:50.878851  226587 system_pods.go:59] 18 kube-system pods found
	I0717 21:59:50.878881  226587 system_pods.go:61] "coredns-5d78c9869d-rqkkk" [3e0d344b-b2d2-4148-b120-5ce3c09b16ab] Running
	I0717 21:59:50.878893  226587 system_pods.go:61] "csi-hostpath-attacher-0" [7c8c6e0a-7688-4a61-ad0c-27e133c7fed8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0717 21:59:50.878911  226587 system_pods.go:61] "csi-hostpath-resizer-0" [ba8aa0b9-34b4-4a60-829c-cdfda6efd855] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0717 21:59:50.878926  226587 system_pods.go:61] "csi-hostpathplugin-24f2r" [ec185129-456f-4a82-b042-fcc735d2e89d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0717 21:59:50.878938  226587 system_pods.go:61] "etcd-addons-759450" [009c41a7-2126-4165-978e-0feb0ea3a9c0] Running
	I0717 21:59:50.878945  226587 system_pods.go:61] "kindnet-9qxf8" [7d01aaba-3101-4933-a545-4d522df5406e] Running
	I0717 21:59:50.878956  226587 system_pods.go:61] "kube-apiserver-addons-759450" [c356b3d7-bc09-4ee7-a10e-8d6426a60490] Running
	I0717 21:59:50.878963  226587 system_pods.go:61] "kube-controller-manager-addons-759450" [05375272-df38-4c29-bcc3-4a057d60859d] Running
	I0717 21:59:50.878973  226587 system_pods.go:61] "kube-ingress-dns-minikube" [d57eeab6-26e5-4953-8c79-25a07898439f] Running
	I0717 21:59:50.878982  226587 system_pods.go:61] "kube-proxy-bxr9d" [d883c565-d498-4ba6-8221-5f19562d94ec] Running
	I0717 21:59:50.878989  226587 system_pods.go:61] "kube-scheduler-addons-759450" [ef7fdc0d-d134-4dc2-9134-9ae2008a8355] Running
	I0717 21:59:50.878998  226587 system_pods.go:61] "metrics-server-844d8db974-gtlnf" [bc43e1d5-26a9-42fc-a9aa-ad13b9d7d6a7] Running
	I0717 21:59:50.879005  226587 system_pods.go:61] "registry-k4mwx" [d04a15e8-945d-4017-9b42-4202fc1327d9] Running
	I0717 21:59:50.879015  226587 system_pods.go:61] "registry-proxy-pchzf" [7055382d-1773-4ccc-bf7e-d773091690c4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0717 21:59:50.879030  226587 system_pods.go:61] "snapshot-controller-75bbb956b9-8njh5" [749dddea-6d75-4e20-8223-afeadcba335c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0717 21:59:50.879045  226587 system_pods.go:61] "snapshot-controller-75bbb956b9-9hm85" [9307e92d-08eb-4ec6-aad4-0dfc89c0add8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0717 21:59:50.879055  226587 system_pods.go:61] "storage-provisioner" [75603f72-9c20-4d74-b84e-0d23478a7248] Running
	I0717 21:59:50.879068  226587 system_pods.go:61] "tiller-deploy-6847666dc-4vd8n" [1b041e2f-332b-4ca4-bfec-6945f90ce8a2] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0717 21:59:50.879080  226587 system_pods.go:74] duration metric: took 13.656102ms to wait for pod list to return data ...
	I0717 21:59:50.879092  226587 default_sa.go:34] waiting for default service account to be created ...
	I0717 21:59:50.881391  226587 default_sa.go:45] found service account: "default"
	I0717 21:59:50.881419  226587 default_sa.go:55] duration metric: took 2.317186ms for default service account to be created ...
	I0717 21:59:50.881429  226587 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 21:59:50.885419  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:50.885875  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:50.890624  226587 system_pods.go:86] 18 kube-system pods found
	I0717 21:59:50.890649  226587 system_pods.go:89] "coredns-5d78c9869d-rqkkk" [3e0d344b-b2d2-4148-b120-5ce3c09b16ab] Running
	I0717 21:59:50.890661  226587 system_pods.go:89] "csi-hostpath-attacher-0" [7c8c6e0a-7688-4a61-ad0c-27e133c7fed8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0717 21:59:50.890675  226587 system_pods.go:89] "csi-hostpath-resizer-0" [ba8aa0b9-34b4-4a60-829c-cdfda6efd855] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0717 21:59:50.890688  226587 system_pods.go:89] "csi-hostpathplugin-24f2r" [ec185129-456f-4a82-b042-fcc735d2e89d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0717 21:59:50.890701  226587 system_pods.go:89] "etcd-addons-759450" [009c41a7-2126-4165-978e-0feb0ea3a9c0] Running
	I0717 21:59:50.890709  226587 system_pods.go:89] "kindnet-9qxf8" [7d01aaba-3101-4933-a545-4d522df5406e] Running
	I0717 21:59:50.890716  226587 system_pods.go:89] "kube-apiserver-addons-759450" [c356b3d7-bc09-4ee7-a10e-8d6426a60490] Running
	I0717 21:59:50.890726  226587 system_pods.go:89] "kube-controller-manager-addons-759450" [05375272-df38-4c29-bcc3-4a057d60859d] Running
	I0717 21:59:50.890739  226587 system_pods.go:89] "kube-ingress-dns-minikube" [d57eeab6-26e5-4953-8c79-25a07898439f] Running
	I0717 21:59:50.890747  226587 system_pods.go:89] "kube-proxy-bxr9d" [d883c565-d498-4ba6-8221-5f19562d94ec] Running
	I0717 21:59:50.890752  226587 system_pods.go:89] "kube-scheduler-addons-759450" [ef7fdc0d-d134-4dc2-9134-9ae2008a8355] Running
	I0717 21:59:50.890758  226587 system_pods.go:89] "metrics-server-844d8db974-gtlnf" [bc43e1d5-26a9-42fc-a9aa-ad13b9d7d6a7] Running
	I0717 21:59:50.890764  226587 system_pods.go:89] "registry-k4mwx" [d04a15e8-945d-4017-9b42-4202fc1327d9] Running
	I0717 21:59:50.890776  226587 system_pods.go:89] "registry-proxy-pchzf" [7055382d-1773-4ccc-bf7e-d773091690c4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0717 21:59:50.890791  226587 system_pods.go:89] "snapshot-controller-75bbb956b9-8njh5" [749dddea-6d75-4e20-8223-afeadcba335c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0717 21:59:50.890805  226587 system_pods.go:89] "snapshot-controller-75bbb956b9-9hm85" [9307e92d-08eb-4ec6-aad4-0dfc89c0add8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0717 21:59:50.890816  226587 system_pods.go:89] "storage-provisioner" [75603f72-9c20-4d74-b84e-0d23478a7248] Running
	I0717 21:59:50.890828  226587 system_pods.go:89] "tiller-deploy-6847666dc-4vd8n" [1b041e2f-332b-4ca4-bfec-6945f90ce8a2] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0717 21:59:50.890840  226587 system_pods.go:126] duration metric: took 9.404178ms to wait for k8s-apps to be running ...
	I0717 21:59:50.890852  226587 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 21:59:50.890905  226587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 21:59:50.904379  226587 system_svc.go:56] duration metric: took 13.518184ms (WaitForService) waiting for the kubelet service.
	I0717 21:59:50.904404  226587 kubeadm.go:581] duration metric: took 36.140465995s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 21:59:50.904429  226587 node_conditions.go:102] verifying NodePressure condition ...
	I0717 21:59:50.960762  226587 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0717 21:59:50.960795  226587 node_conditions.go:123] node cpu capacity is 8
	I0717 21:59:50.960845  226587 node_conditions.go:105] duration metric: took 56.410245ms to run NodePressure ...
	I0717 21:59:50.960864  226587 start.go:228] waiting for startup goroutines ...
	I0717 21:59:51.274693  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:51.378728  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:51.385621  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:51.385702  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:51.773749  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:51.879320  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:51.885495  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:51.885683  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:52.274088  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:52.379200  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:52.384686  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:52.386286  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:52.773771  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:52.879345  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:52.885366  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:52.886150  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:53.274218  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:53.378369  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:53.385703  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:53.385876  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 21:59:53.772288  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:53.878557  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:53.885085  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:53.885441  226587 kapi.go:107] duration metric: took 33.511778183s to wait for kubernetes.io/minikube-addons=registry ...
	I0717 21:59:54.273229  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:54.378901  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:54.385203  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:54.772337  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:54.878875  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:54.886100  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:55.272626  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:55.378499  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:55.385315  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:55.776375  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:55.880467  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:55.885072  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:56.274441  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:56.379003  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:56.385571  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:56.773389  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:56.878375  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:56.885575  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:57.272296  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:57.378811  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:57.385216  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:57.773565  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:57.879082  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:57.884584  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:58.273710  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:58.378243  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:58.384969  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:58.773680  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:58.879032  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:58.884269  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:59.273470  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:59.378420  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:59.385039  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 21:59:59.773405  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 21:59:59.878795  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 21:59:59.886676  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:00.272465  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:00:00.378211  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:00.384569  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:00.774218  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:00:00.879090  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:00.885483  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:01.273322  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:00:01.379358  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:01.385135  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:01.774460  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:00:01.878891  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:01.885858  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:02.274378  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:00:02.379654  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:02.385644  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:02.774206  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:00:02.995824  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:02.996352  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:03.277704  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:00:03.379181  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:03.384517  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:03.775439  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:00:03.878607  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:03.884939  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:04.273874  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:00:04.378821  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:04.385499  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:04.773185  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:00:04.878716  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:04.885928  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:05.274367  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:00:05.378811  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:05.385646  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:05.772499  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:00:05.879736  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:05.885965  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:06.273578  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:00:06.379061  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:06.384877  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:06.772951  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:00:06.878708  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:06.885631  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:07.273905  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:00:07.378400  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:07.386916  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:07.772887  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:00:07.878418  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:07.884498  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:08.273366  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:00:08.378930  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:08.384916  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:08.773932  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:00:08.878383  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:08.884831  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:09.273931  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:00:09.378397  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:09.384821  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:09.774188  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:00:09.878881  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:09.885324  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:10.273406  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:00:10.378932  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:10.386129  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:10.775551  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:00:10.878568  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:10.885099  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:11.273379  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:00:11.378881  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:11.385062  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:11.773591  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:00:11.878273  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:11.885463  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:12.273481  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 22:00:12.378760  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:12.385113  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:12.773505  226587 kapi.go:107] duration metric: took 50.575651361s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0717 22:00:12.878595  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:12.884803  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:13.377986  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:13.384426  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:13.878786  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:13.885498  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:14.378689  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:14.385505  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:14.878373  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:14.884727  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:15.378998  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:15.385914  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:15.878589  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:15.884657  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:16.378970  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:16.385451  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:16.878316  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:16.884375  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:17.378299  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:17.384498  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:17.878003  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:17.885456  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:18.378372  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:18.384636  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:18.878367  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:18.884808  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:19.378442  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:19.384493  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:19.878509  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:19.885127  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:20.378313  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:20.385282  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:20.877869  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:20.885370  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:21.378037  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:21.385323  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:21.878461  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:21.884724  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:22.378402  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:22.384917  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:22.879251  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:22.884336  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:23.378334  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:23.384539  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:23.878358  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:23.884577  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:24.378778  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:24.384975  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:24.878357  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:24.884462  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:25.378987  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:25.385926  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:25.878151  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:25.884444  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:26.378383  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:26.384494  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:26.878313  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:26.886163  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:27.379087  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:27.386254  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:27.878820  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:27.886049  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:28.379252  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:28.385473  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:28.879092  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:28.885861  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:29.378651  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:29.385822  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:29.878545  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:29.885280  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:30.378241  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:30.387306  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:30.879357  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:30.884792  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:31.378606  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:31.385686  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:31.878885  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:31.886028  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:32.378341  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:32.384693  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:32.878468  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:32.885017  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:33.378878  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:33.385013  226587 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 22:00:33.878089  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:33.885259  226587 kapi.go:107] duration metric: took 1m13.515055596s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0717 22:00:34.378236  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:34.879080  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:35.378736  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:35.877890  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:36.404383  226587 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 22:00:36.878644  226587 kapi.go:107] duration metric: took 1m13.506981591s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0717 22:00:36.880797  226587 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-759450 cluster.
	I0717 22:00:36.882576  226587 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0717 22:00:36.884076  226587 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0717 22:00:36.885641  226587 out.go:177] * Enabled addons: cloud-spanner, inspektor-gadget, helm-tiller, ingress-dns, storage-provisioner, default-storageclass, metrics-server, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0717 22:00:36.887159  226587 addons.go:502] enable addons completed in 1m22.801524332s: enabled=[cloud-spanner inspektor-gadget helm-tiller ingress-dns storage-provisioner default-storageclass metrics-server volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0717 22:00:36.887200  226587 start.go:233] waiting for cluster config update ...
	I0717 22:00:36.887227  226587 start.go:242] writing updated cluster config ...
	I0717 22:00:36.887567  226587 ssh_runner.go:195] Run: rm -f paused
	I0717 22:00:36.936835  226587 start.go:578] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0717 22:00:36.938966  226587 out.go:177] * Done! kubectl is now configured to use "addons-759450" cluster and "default" namespace by default
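	
	The gcp-auth messages above name `gcp-auth-skip-secret` as the per-pod opt-out label. A minimal sketch of what that opt-out could look like at pod creation; the pod name and label value here are assumptions (the message only specifies the key), and the image is the hello-app image that appears later in this log:
	
	    # hypothetical example; only the label key comes from the addon output above
	    kubectl apply -f - <<'EOF'
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: demo-no-gcp-creds            # hypothetical name, not from this run
	      labels:
	        gcp-auth-skip-secret: "true"     # assumed value; the message only specifies the key
	    spec:
	      containers:
	      - name: app
	        image: gcr.io/google-samples/hello-app:1.0
	    EOF
	
	Per the --refresh note above, pods that existed before the addon was enabled keep their old spec until recreated or refreshed.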
	
	* 
	* ==> CRI-O <==
	* Jul 17 22:03:20 addons-759450 crio[950]: time="2023-07-17 22:03:20.201424426Z" level=info msg="Removing container: 7650a6361f9a78577bfe96eadcf86cc4b220742b1332263343f24f347609536d" id=607f5981-0cc9-42d8-92ac-59e78b399150 name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 17 22:03:20 addons-759450 crio[950]: time="2023-07-17 22:03:20.217338238Z" level=info msg="Removed container 7650a6361f9a78577bfe96eadcf86cc4b220742b1332263343f24f347609536d: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=607f5981-0cc9-42d8-92ac-59e78b399150 name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 17 22:03:20 addons-759450 crio[950]: time="2023-07-17 22:03:20.772784627Z" level=info msg="Stopping container: 07780cacce5242a002b0446c9df20eef163b352748ecd797de630b7bd21e1ee4 (timeout: 1s)" id=44b26df6-0235-4394-a266-b53d2bd2e996 name=/runtime.v1.RuntimeService/StopContainer
	Jul 17 22:03:21 addons-759450 crio[950]: time="2023-07-17 22:03:21.684885742Z" level=info msg="Pulled image: gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea" id=70c83b26-db13-40cb-9142-4a8f0c8bd2f4 name=/runtime.v1.ImageService/PullImage
	Jul 17 22:03:21 addons-759450 crio[950]: time="2023-07-17 22:03:21.685645729Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=91675850-c317-48e9-bab1-e278a500909c name=/runtime.v1.ImageService/ImageStatus
	Jul 17 22:03:21 addons-759450 crio[950]: time="2023-07-17 22:03:21.686275512Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:13753a81eccfdd153bf7fc9a4c9198edbcce0110e7f46ed0d38cc654a6458ff5,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea],Size_:28496999,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=91675850-c317-48e9-bab1-e278a500909c name=/runtime.v1.ImageService/ImageStatus
	Jul 17 22:03:21 addons-759450 crio[950]: time="2023-07-17 22:03:21.686990130Z" level=info msg="Creating container: default/hello-world-app-65bdb79f98-6fs5h/hello-world-app" id=8d9904de-a316-4803-bc80-cb2071fc9dea name=/runtime.v1.RuntimeService/CreateContainer
	Jul 17 22:03:21 addons-759450 crio[950]: time="2023-07-17 22:03:21.687077512Z" level=warning msg="Allowed annotations are specified for workload []"
	Jul 17 22:03:21 addons-759450 crio[950]: time="2023-07-17 22:03:21.758275414Z" level=info msg="Created container 7dffeb973eeeca100ee5cfc30a492af0d0e7a57bf8bff35863935e08deb01a91: default/hello-world-app-65bdb79f98-6fs5h/hello-world-app" id=8d9904de-a316-4803-bc80-cb2071fc9dea name=/runtime.v1.RuntimeService/CreateContainer
	Jul 17 22:03:21 addons-759450 crio[950]: time="2023-07-17 22:03:21.758911455Z" level=info msg="Starting container: 7dffeb973eeeca100ee5cfc30a492af0d0e7a57bf8bff35863935e08deb01a91" id=091b0b6a-e31f-4bf7-9b60-f8642f1492b7 name=/runtime.v1.RuntimeService/StartContainer
	Jul 17 22:03:21 addons-759450 crio[950]: time="2023-07-17 22:03:21.767498058Z" level=info msg="Started container" PID=9790 containerID=7dffeb973eeeca100ee5cfc30a492af0d0e7a57bf8bff35863935e08deb01a91 description=default/hello-world-app-65bdb79f98-6fs5h/hello-world-app id=091b0b6a-e31f-4bf7-9b60-f8642f1492b7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=60b35fdcd0826023bdcb1f6300b4820e27ac5185a6658a92accad75b768ca74c
	Jul 17 22:03:21 addons-759450 crio[950]: time="2023-07-17 22:03:21.783168649Z" level=warning msg="Stopping container 07780cacce5242a002b0446c9df20eef163b352748ecd797de630b7bd21e1ee4 with stop signal timed out: timeout reached after 1 seconds waiting for container process to exit" id=44b26df6-0235-4394-a266-b53d2bd2e996 name=/runtime.v1.RuntimeService/StopContainer
	Jul 17 22:03:21 addons-759450 conmon[6098]: conmon 07780cacce5242a002b0 <ninfo>: container 6110 exited with status 137
	Jul 17 22:03:21 addons-759450 crio[950]: time="2023-07-17 22:03:21.925035231Z" level=info msg="Stopped container 07780cacce5242a002b0446c9df20eef163b352748ecd797de630b7bd21e1ee4: ingress-nginx/ingress-nginx-controller-7799c6795f-2s5vr/controller" id=44b26df6-0235-4394-a266-b53d2bd2e996 name=/runtime.v1.RuntimeService/StopContainer
	Jul 17 22:03:21 addons-759450 crio[950]: time="2023-07-17 22:03:21.925581423Z" level=info msg="Stopping pod sandbox: 121014a26343a1e8f4fd619405a170a61de08795549bb479b8e4bb5440bc4dd8" id=c38a1ddd-f2b2-4706-8b5f-b97b4399ea4d name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 17 22:03:21 addons-759450 crio[950]: time="2023-07-17 22:03:21.928680395Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-IKJFS4UDIX7ZRNPV - [0:0]\n:KUBE-HP-ND5JEFIN5GQKKTXH - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-X KUBE-HP-IKJFS4UDIX7ZRNPV\n-X KUBE-HP-ND5JEFIN5GQKKTXH\nCOMMIT\n"
	Jul 17 22:03:21 addons-759450 crio[950]: time="2023-07-17 22:03:21.929945753Z" level=info msg="Closing host port tcp:80"
	Jul 17 22:03:21 addons-759450 crio[950]: time="2023-07-17 22:03:21.929977955Z" level=info msg="Closing host port tcp:443"
	Jul 17 22:03:21 addons-759450 crio[950]: time="2023-07-17 22:03:21.931180561Z" level=info msg="Host port tcp:80 does not have an open socket"
	Jul 17 22:03:21 addons-759450 crio[950]: time="2023-07-17 22:03:21.931200135Z" level=info msg="Host port tcp:443 does not have an open socket"
	Jul 17 22:03:21 addons-759450 crio[950]: time="2023-07-17 22:03:21.931318046Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7799c6795f-2s5vr Namespace:ingress-nginx ID:121014a26343a1e8f4fd619405a170a61de08795549bb479b8e4bb5440bc4dd8 UID:d85e2e69-dc45-48d1-bc6f-79ecf6ecbaf6 NetNS:/var/run/netns/ea651ec5-cda5-4d77-bb18-158160700b2f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jul 17 22:03:21 addons-759450 crio[950]: time="2023-07-17 22:03:21.931456014Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7799c6795f-2s5vr from CNI network \"kindnet\" (type=ptp)"
	Jul 17 22:03:21 addons-759450 crio[950]: time="2023-07-17 22:03:21.972940319Z" level=info msg="Stopped pod sandbox: 121014a26343a1e8f4fd619405a170a61de08795549bb479b8e4bb5440bc4dd8" id=c38a1ddd-f2b2-4706-8b5f-b97b4399ea4d name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 17 22:03:22 addons-759450 crio[950]: time="2023-07-17 22:03:22.208516958Z" level=info msg="Removing container: 07780cacce5242a002b0446c9df20eef163b352748ecd797de630b7bd21e1ee4" id=0549106e-520e-4406-8eb4-978aae2c9398 name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 17 22:03:22 addons-759450 crio[950]: time="2023-07-17 22:03:22.224839254Z" level=info msg="Removed container 07780cacce5242a002b0446c9df20eef163b352748ecd797de630b7bd21e1ee4: ingress-nginx/ingress-nginx-controller-7799c6795f-2s5vr/controller" id=0549106e-520e-4406-8eb4-978aae2c9398 name=/runtime.v1.RuntimeService/RemoveContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7dffeb973eeec       gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea                      7 seconds ago       Running             hello-world-app           0                   60b35fdcd0826       hello-world-app-65bdb79f98-6fs5h
	c78dd40f07932       docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6                              2 minutes ago       Running             nginx                     0                   757074e5e18e6       nginx
	72d489de381ff       ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45                        2 minutes ago       Running             headlamp                  0                   cb28f8e4889a3       headlamp-66f6498c69-tlv4k
	c931ba86ff4b2       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 2 minutes ago       Running             gcp-auth                  0                   5692a3528d35a       gcp-auth-58478865f7-7wkp7
	aa7fc0a55b530       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d   3 minutes ago       Exited              patch                     0                   ca346207f9004       ingress-nginx-admission-patch-kzwh7
	e456d064b2268       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d   3 minutes ago       Exited              create                    0                   9f114e4b68713       ingress-nginx-admission-create-lqbz4
	054d2b7b6c670       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   d7e9ca77b9464       storage-provisioner
	c309b292e6d0b       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             4 minutes ago       Running             coredns                   0                   6a9c951d9e8ab       coredns-5d78c9869d-rqkkk
	b304f489904a5       b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da                                                             4 minutes ago       Running             kindnet-cni               0                   405ce1092874e       kindnet-9qxf8
	becd0b1d3ee83       5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c                                                             4 minutes ago       Running             kube-proxy                0                   00a53b265b661       kube-proxy-bxr9d
	4e91e0b8b1320       7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f                                                             4 minutes ago       Running             kube-controller-manager   0                   441e1ba211ca0       kube-controller-manager-addons-759450
	d873b2d686712       08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a                                                             4 minutes ago       Running             kube-apiserver            0                   c97b281d07989       kube-apiserver-addons-759450
	d3f932f2f5ac7       41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a                                                             4 minutes ago       Running             kube-scheduler            0                   9b58b3a71396c       kube-scheduler-addons-759450
	b0156997fbfed       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681                                                             4 minutes ago       Running             etcd                      0                   7ee084a545b1d       etcd-addons-759450
	
	* 
	* ==> coredns [c309b292e6d0b7501c29b2e1d3cca30a27600f7ce0fffb4e774bc896a0ed5ac8] <==
	* [INFO] 10.244.0.5:60414 - 24054 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000086547s
	[INFO] 10.244.0.5:56166 - 31777 "AAAA IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.003960273s
	[INFO] 10.244.0.5:56166 - 9510 "A IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.003987987s
	[INFO] 10.244.0.5:58451 - 24000 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.003385068s
	[INFO] 10.244.0.5:58451 - 34498 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005547955s
	[INFO] 10.244.0.5:55605 - 47972 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.003386334s
	[INFO] 10.244.0.5:55605 - 53607 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.00382575s
	[INFO] 10.244.0.5:34341 - 38875 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00006245s
	[INFO] 10.244.0.5:34341 - 64473 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000081851s
	[INFO] 10.244.0.18:51464 - 46650 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000258709s
	[INFO] 10.244.0.18:57952 - 58957 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000176373s
	[INFO] 10.244.0.18:48209 - 15899 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000138939s
	[INFO] 10.244.0.18:56943 - 34248 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000145724s
	[INFO] 10.244.0.18:47552 - 10507 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000127683s
	[INFO] 10.244.0.18:38401 - 58505 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000150012s
	[INFO] 10.244.0.18:49916 - 41688 "A IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.005145794s
	[INFO] 10.244.0.18:42724 - 46802 "AAAA IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.005479331s
	[INFO] 10.244.0.18:43707 - 65199 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.003840072s
	[INFO] 10.244.0.18:35038 - 39910 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.005424796s
	[INFO] 10.244.0.18:47912 - 23231 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004695911s
	[INFO] 10.244.0.18:39830 - 35832 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004741003s
	[INFO] 10.244.0.18:57146 - 12874 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.000716132s
	[INFO] 10.244.0.18:53516 - 27190 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000804343s
	[INFO] 10.244.0.20:37865 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000196088s
	[INFO] 10.244.0.20:35176 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000123812s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-759450
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-759450
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8
	                    minikube.k8s.io/name=addons-759450
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_17T21_59_01_0700
	                    minikube.k8s.io/version=v1.31.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-759450
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 21:58:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-759450
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 22:03:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 22:01:35 +0000   Mon, 17 Jul 2023 21:58:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 22:01:35 +0000   Mon, 17 Jul 2023 21:58:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 22:01:35 +0000   Mon, 17 Jul 2023 21:58:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 22:01:35 +0000   Mon, 17 Jul 2023 21:59:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-759450
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	System Info:
	  Machine ID:                 cfa438a4e2dd46058de963fb6af566d9
	  System UUID:                4c89c2bc-0d6b-4671-84cd-f32e61255f6b
	  Boot ID:                    7db0a284-d4e9-48b4-92fc-f96afb04e8db
	  Kernel Version:             5.15.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-65bdb79f98-6fs5h         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m36s
	  gcp-auth                    gcp-auth-58478865f7-7wkp7                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  headlamp                    headlamp-66f6498c69-tlv4k                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m51s
	  kube-system                 coredns-5d78c9869d-rqkkk                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m14s
	  kube-system                 etcd-addons-759450                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m27s
	  kube-system                 kindnet-9qxf8                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m15s
	  kube-system                 kube-apiserver-addons-759450             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 kube-controller-manager-addons-759450    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 kube-proxy-bxr9d                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 kube-scheduler-addons-759450             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m11s                  kube-proxy       
	  Normal  Starting                 4m34s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m34s (x8 over 4m34s)  kubelet          Node addons-759450 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m34s (x8 over 4m34s)  kubelet          Node addons-759450 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m34s (x8 over 4m34s)  kubelet          Node addons-759450 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m28s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m27s                  kubelet          Node addons-759450 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m27s                  kubelet          Node addons-759450 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m27s                  kubelet          Node addons-759450 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m15s                  node-controller  Node addons-759450 event: Registered Node addons-759450 in Controller
	  Normal  NodeReady                4m8s                   kubelet          Node addons-759450 status is now: NodeReady
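	
	For reference, the node dump above is standard describe output and could be regenerated against this cluster with (illustrative, reusing the context name from this run):
	
	  kubectl --context addons-759450 describe node addons-759450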
	
	* 
	* ==> dmesg <==
	* [Jul17 21:50] IPv4: martian source 10.244.0.1 from 10.244.0.40, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d2 65 78 9b 40 8c 08 06
	[  +0.235428] IPv4: martian source 10.244.0.1 from 10.244.0.41, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e be 36 40 38 c9 08 06
	[Jul17 21:52] IPv4: martian source 10.244.0.1 from 10.244.0.42, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 06 76 97 ad 43 c7 08 06
	[ +15.811344] IPv4: martian source 10.244.0.1 from 10.244.0.44, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 42 37 7d 09 0a 5d 08 06
	[Jul17 21:53] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Jul17 21:55] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 38 26 56 f6 62 08 06
	[Jul17 22:01] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 92 2c a0 af 82 37 46 e6 5a fa b2 e3 08 00
	[  +1.023927] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 2c a0 af 82 37 46 e6 5a fa b2 e3 08 00
	[  +2.015817] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000031] ll header: 00000000: 92 2c a0 af 82 37 46 e6 5a fa b2 e3 08 00
	[  +4.191648] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 92 2c a0 af 82 37 46 e6 5a fa b2 e3 08 00
	[  +8.191412] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 92 2c a0 af 82 37 46 e6 5a fa b2 e3 08 00
	[ +16.126913] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 2c a0 af 82 37 46 e6 5a fa b2 e3 08 00
	[Jul17 22:02] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 92 2c a0 af 82 37 46 e6 5a fa b2 e3 08 00
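	
	The "martian source" entries are the kernel flagging packets whose source address is not routable on the interface they arrived on (here, pod-CIDR and loopback sources seen on eth0); they are only logged when martian logging is enabled, which could be confirmed with (illustrative):
	
	  minikube -p addons-759450 ssh "sysctl net.ipv4.conf.all.log_martians"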
	
	* 
	* ==> etcd [b0156997fbfedb5ad9c2472dc813b533de3194f9356a3e9b4d1a60434774703e] <==
	* {"level":"info","ts":"2023-07-17T21:58:56.367Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T21:58:56.367Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-17T21:58:56.367Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-17T21:58:56.367Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T21:58:56.368Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T21:58:56.368Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T21:58:56.368Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-17T21:58:56.368Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-07-17T21:59:15.959Z","caller":"traceutil/trace.go:171","msg":"trace[193623176] transaction","detail":"{read_only:false; response_revision:409; number_of_response:1; }","duration":"182.927977ms","start":"2023-07-17T21:59:15.776Z","end":"2023-07-17T21:59:15.959Z","steps":["trace[193623176] 'process raft request'  (duration: 95.74199ms)","trace[193623176] 'get key's previous created_revision and leaseID' {req_type:put; key:/registry/serviceaccounts/kube-system/disruption-controller; req_size:202; } (duration: 86.741736ms)"],"step_count":2}
	{"level":"info","ts":"2023-07-17T21:59:16.672Z","caller":"traceutil/trace.go:171","msg":"trace[1281495422] transaction","detail":"{read_only:false; response_revision:416; number_of_response:1; }","duration":"188.019729ms","start":"2023-07-17T21:59:16.484Z","end":"2023-07-17T21:59:16.672Z","steps":["trace[1281495422] 'process raft request'  (duration: 90.866255ms)","trace[1281495422] 'compare'  (duration: 94.018047ms)"],"step_count":2}
	{"level":"info","ts":"2023-07-17T21:59:16.672Z","caller":"traceutil/trace.go:171","msg":"trace[2045894319] linearizableReadLoop","detail":"{readStateIndex:431; appliedIndex:430; }","duration":"188.312403ms","start":"2023-07-17T21:59:16.484Z","end":"2023-07-17T21:59:16.672Z","steps":["trace[2045894319] 'read index received'  (duration: 84.282391ms)","trace[2045894319] 'applied index is now lower than readState.Index'  (duration: 104.028325ms)"],"step_count":2}
	{"level":"warn","ts":"2023-07-17T21:59:16.673Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"188.448165ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-9qxf8\" ","response":"range_response_count:1 size:4699"}
	{"level":"info","ts":"2023-07-17T21:59:16.673Z","caller":"traceutil/trace.go:171","msg":"trace[1356959355] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-9qxf8; range_end:; response_count:1; response_revision:416; }","duration":"188.511533ms","start":"2023-07-17T21:59:16.484Z","end":"2023-07-17T21:59:16.673Z","steps":["trace[1356959355] 'agreement among raft nodes before linearized reading'  (duration: 188.391392ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T21:59:16.760Z","caller":"traceutil/trace.go:171","msg":"trace[1100151472] transaction","detail":"{read_only:false; response_revision:417; number_of_response:1; }","duration":"180.121115ms","start":"2023-07-17T21:59:16.580Z","end":"2023-07-17T21:59:16.760Z","steps":["trace[1100151472] 'process raft request'  (duration: 91.975078ms)","trace[1100151472] 'check requests'  (duration: 87.592792ms)"],"step_count":2}
	{"level":"warn","ts":"2023-07-17T21:59:17.162Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.441988ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128022498090897337 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/daemonsets/kube-system/kindnet\" mod_revision:362 > success:<request_put:<key:\"/registry/daemonsets/kube-system/kindnet\" value_size:4635 >> failure:<request_range:<key:\"/registry/daemonsets/kube-system/kindnet\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-07-17T21:59:17.162Z","caller":"traceutil/trace.go:171","msg":"trace[1452046598] transaction","detail":"{read_only:false; response_revision:420; number_of_response:1; }","duration":"379.490465ms","start":"2023-07-17T21:59:16.782Z","end":"2023-07-17T21:59:17.162Z","steps":["trace[1452046598] 'process raft request'  (duration: 79.117671ms)","trace[1452046598] 'compare'  (duration: 107.110489ms)"],"step_count":2}
	{"level":"warn","ts":"2023-07-17T21:59:17.162Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-17T21:59:16.782Z","time spent":"379.555623ms","remote":"127.0.0.1:58828","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4683,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/daemonsets/kube-system/kindnet\" mod_revision:362 > success:<request_put:<key:\"/registry/daemonsets/kube-system/kindnet\" value_size:4635 >> failure:<request_range:<key:\"/registry/daemonsets/kube-system/kindnet\" > >"}
	{"level":"info","ts":"2023-07-17T21:59:17.474Z","caller":"traceutil/trace.go:171","msg":"trace[1037276645] transaction","detail":"{read_only:false; response_revision:426; number_of_response:1; }","duration":"101.319317ms","start":"2023-07-17T21:59:17.373Z","end":"2023-07-17T21:59:17.474Z","steps":["trace[1037276645] 'process raft request'  (duration: 88.658489ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T21:59:17.483Z","caller":"traceutil/trace.go:171","msg":"trace[2089527496] transaction","detail":"{read_only:false; response_revision:427; number_of_response:1; }","duration":"107.057612ms","start":"2023-07-17T21:59:17.376Z","end":"2023-07-17T21:59:17.483Z","steps":["trace[2089527496] 'process raft request'  (duration: 106.664148ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T22:00:02.699Z","caller":"traceutil/trace.go:171","msg":"trace[1102421951] transaction","detail":"{read_only:false; response_revision:960; number_of_response:1; }","duration":"143.849243ms","start":"2023-07-17T22:00:02.555Z","end":"2023-07-17T22:00:02.699Z","steps":["trace[1102421951] 'process raft request'  (duration: 81.637008ms)","trace[1102421951] 'compare'  (duration: 62.06338ms)"],"step_count":2}
	{"level":"warn","ts":"2023-07-17T22:00:02.993Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.336085ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14753"}
	{"level":"warn","ts":"2023-07-17T22:00:02.993Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.672627ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10534"}
	{"level":"info","ts":"2023-07-17T22:00:02.993Z","caller":"traceutil/trace.go:171","msg":"trace[1966206161] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:960; }","duration":"110.479043ms","start":"2023-07-17T22:00:02.883Z","end":"2023-07-17T22:00:02.993Z","steps":["trace[1966206161] 'range keys from in-memory index tree'  (duration: 110.210946ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T22:00:02.993Z","caller":"traceutil/trace.go:171","msg":"trace[1604340017] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:960; }","duration":"116.735402ms","start":"2023-07-17T22:00:02.877Z","end":"2023-07-17T22:00:02.993Z","steps":["trace[1604340017] 'range keys from in-memory index tree'  (duration: 116.572828ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T22:00:03.136Z","caller":"traceutil/trace.go:171","msg":"trace[476450146] transaction","detail":"{read_only:false; response_revision:961; number_of_response:1; }","duration":"101.643353ms","start":"2023-07-17T22:00:03.035Z","end":"2023-07-17T22:00:03.136Z","steps":["trace[476450146] 'process raft request'  (duration: 101.529503ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [c931ba86ff4b27fc5609b5c2ad4ab7e14ce065dcb3bc2ee3858a484bd7aed39c] <==
	* 2023/07/17 22:00:35 GCP Auth Webhook started!
	2023/07/17 22:00:37 Ready to marshal response ...
	2023/07/17 22:00:37 Ready to write response ...
	2023/07/17 22:00:37 Ready to marshal response ...
	2023/07/17 22:00:37 Ready to write response ...
	2023/07/17 22:00:37 Ready to marshal response ...
	2023/07/17 22:00:37 Ready to write response ...
	2023/07/17 22:00:47 Ready to marshal response ...
	2023/07/17 22:00:47 Ready to write response ...
	2023/07/17 22:00:50 Ready to marshal response ...
	2023/07/17 22:00:50 Ready to write response ...
	2023/07/17 22:00:52 Ready to marshal response ...
	2023/07/17 22:00:52 Ready to write response ...
	2023/07/17 22:00:53 Ready to marshal response ...
	2023/07/17 22:00:53 Ready to write response ...
	2023/07/17 22:01:09 Ready to marshal response ...
	2023/07/17 22:01:09 Ready to write response ...
	2023/07/17 22:03:18 Ready to marshal response ...
	2023/07/17 22:03:18 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  22:03:29 up  1:45,  0 users,  load average: 0.29, 0.77, 0.90
	Linux addons-759450 5.15.0-1037-gcp #45~20.04.1-Ubuntu SMP Thu Jun 22 08:31:09 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [b304f489904a5d60084e2e5589456aac314937c40d92df629f5cb841de48137d] <==
	* I0717 22:01:20.191084       1 main.go:227] handling current node
	I0717 22:01:30.195266       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 22:01:30.195290       1 main.go:227] handling current node
	I0717 22:01:40.206784       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 22:01:40.206811       1 main.go:227] handling current node
	I0717 22:01:50.211019       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 22:01:50.211040       1 main.go:227] handling current node
	I0717 22:02:00.223650       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 22:02:00.223676       1 main.go:227] handling current node
	I0717 22:02:10.228039       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 22:02:10.228063       1 main.go:227] handling current node
	I0717 22:02:20.233027       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 22:02:20.233051       1 main.go:227] handling current node
	I0717 22:02:30.237031       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 22:02:30.237052       1 main.go:227] handling current node
	I0717 22:02:40.249489       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 22:02:40.249515       1 main.go:227] handling current node
	I0717 22:02:50.253636       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 22:02:50.253658       1 main.go:227] handling current node
	I0717 22:03:00.262831       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 22:03:00.262858       1 main.go:227] handling current node
	I0717 22:03:10.266552       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 22:03:10.266576       1 main.go:227] handling current node
	I0717 22:03:20.271316       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 22:03:20.271341       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [d873b2d6867121ce847019fe7036cf1f50bee3c16b05cdb7740f0d8e3703d32a] <==
	* E0717 22:01:05.269870       1 upgradeaware.go:426] Error proxying data from client to backend: read tcp 192.168.49.2:8443->10.244.0.23:34530: read: connection reset by peer
	I0717 22:01:05.347739       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0717 22:01:26.587937       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 22:01:26.587991       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 22:01:26.596753       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 22:01:26.596808       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 22:01:26.601105       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 22:01:26.601156       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 22:01:26.613140       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 22:01:26.613482       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 22:01:26.662770       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 22:01:26.662839       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 22:01:26.682784       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 22:01:26.682867       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 22:01:26.770368       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 22:01:26.770417       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0717 22:01:27.613501       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0717 22:01:27.770690       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0717 22:01:27.775757       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0717 22:01:51.512728       1 handler_proxy.go:144] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0717 22:01:51.512760       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 22:01:51.512809       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 22:01:51.512819       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 22:03:19.137615       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs=map[IPv4:10.104.175.74]
	
	* 
	* ==> kube-controller-manager [4e91e0b8b1320119f288fa8c145c60f19e9104f3779659b78a353f76324db27c] <==
	* E0717 22:01:44.765369       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 22:01:50.368481       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 22:01:50.368512       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 22:01:58.865771       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 22:01:58.865804       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 22:02:02.281111       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 22:02:02.281145       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 22:02:03.461679       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 22:02:03.461711       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 22:02:14.786460       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 22:02:14.786495       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 22:02:32.198263       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 22:02:32.198296       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 22:02:46.348762       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 22:02:46.348796       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 22:02:51.936385       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 22:02:51.936416       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 22:02:52.711217       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 22:02:52.711250       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0717 22:03:18.985542       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-65bdb79f98 to 1"
	I0717 22:03:18.995581       1 event.go:307] "Event occurred" object="default/hello-world-app-65bdb79f98" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-65bdb79f98-6fs5h"
	W0717 22:03:20.409478       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 22:03:20.409512       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0717 22:03:20.742343       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-create
	I0717 22:03:20.763580       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-patch
	
	* 
	* ==> kube-proxy [becd0b1d3ee83dff2cd0e9d3913f8ef60ecaa7b6f137e67aa653da3e318c40a4] <==
	* I0717 21:59:16.775603       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0717 21:59:16.775717       1 server_others.go:110] "Detected node IP" address="192.168.49.2"
	I0717 21:59:16.775750       1 server_others.go:554] "Using iptables proxy"
	I0717 21:59:17.775515       1 server_others.go:192] "Using iptables Proxier"
	I0717 21:59:17.775652       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0717 21:59:17.775736       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0717 21:59:17.775792       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0717 21:59:17.775860       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 21:59:17.776506       1 server.go:658] "Version info" version="v1.27.3"
	I0717 21:59:17.778933       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 21:59:17.779752       1 config.go:188] "Starting service config controller"
	I0717 21:59:17.779834       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0717 21:59:17.779802       1 config.go:97] "Starting endpoint slice config controller"
	I0717 21:59:17.780032       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0717 21:59:17.780197       1 config.go:315] "Starting node config controller"
	I0717 21:59:17.780638       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0717 21:59:17.966923       1 shared_informer.go:318] Caches are synced for node config
	I0717 21:59:17.966967       1 shared_informer.go:318] Caches are synced for service config
	I0717 21:59:17.980513       1 shared_informer.go:318] Caches are synced for endpoint slice config
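	
	Note the proxier setting route_localnet=1 in the log above: this is what keeps node ports reachable via 127.0.0.1 on the node. The effective value could be confirmed with (illustrative):
	
	  minikube -p addons-759450 ssh "sysctl net.ipv4.conf.all.route_localnet"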
	
	* 
	* ==> kube-scheduler [d3f932f2f5ac70f5392d1e6395f7f959713db52164eb07ee80175d8de99635d9] <==
	* W0717 21:58:57.872578       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 21:58:57.872601       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 21:58:57.873097       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 21:58:57.873119       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 21:58:57.873209       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 21:58:57.873239       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 21:58:57.873216       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 21:58:57.873266       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 21:58:57.873378       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 21:58:57.873401       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 21:58:57.873402       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 21:58:57.873417       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 21:58:57.873485       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 21:58:57.873512       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 21:58:58.725144       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 21:58:58.725204       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 21:58:58.788360       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 21:58:58.788399       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 21:58:58.792033       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 21:58:58.792078       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 21:58:58.830607       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 21:58:58.830642       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 21:58:58.943855       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 21:58:58.943887       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0717 21:59:01.464914       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Jul 17 22:03:20 addons-759450 kubelet[1557]: E0717 22:03:20.775086    1557 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7799c6795f-2s5vr.1772c6ce9460460b", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7799c6795f-2s5vr", UID:"d85e2e69-dc45-48d1-bc6f-79ecf6ecbaf6", APIVersion:"v1", ResourceVersion:"754", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"addons-759450"}, FirstTimestamp:time.Date(2023, time.July, 17, 22, 3, 20, 772257291, time.Local), LastTimestamp:time.Date(2023, time.July, 17, 22, 3, 20, 772257291, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7799c6795f-2s5vr.1772c6ce9460460b" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jul 17 22:03:20 addons-759450 kubelet[1557]: I0717 22:03:20.985108    1557 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=270278be-5bc6-401d-a547-d267e461ff87 path="/var/lib/kubelet/pods/270278be-5bc6-401d-a547-d267e461ff87/volumes"
	Jul 17 22:03:20 addons-759450 kubelet[1557]: I0717 22:03:20.985551    1557 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=a3970d69-7c1c-4d69-adeb-86a0d2400954 path="/var/lib/kubelet/pods/a3970d69-7c1c-4d69-adeb-86a0d2400954/volumes"
	Jul 17 22:03:20 addons-759450 kubelet[1557]: I0717 22:03:20.986013    1557 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=d57eeab6-26e5-4953-8c79-25a07898439f path="/var/lib/kubelet/pods/d57eeab6-26e5-4953-8c79-25a07898439f/volumes"
	Jul 17 22:03:21 addons-759450 kubelet[1557]: E0717 22:03:21.066539    1557 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/0563a699f6945a2461baea70939b4601ffa843a7cad36e7105e7ed392101fb16/diff" to get inode usage: stat /var/lib/containers/storage/overlay/0563a699f6945a2461baea70939b4601ffa843a7cad36e7105e7ed392101fb16/diff: no such file or directory, extraDiskErr: <nil>
	Jul 17 22:03:21 addons-759450 kubelet[1557]: E0717 22:03:21.068698    1557 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/a17a64c622e252f18bf161d98a17b37d4f7eef85b28306b1863a1dae4d9e0620/diff" to get inode usage: stat /var/lib/containers/storage/overlay/a17a64c622e252f18bf161d98a17b37d4f7eef85b28306b1863a1dae4d9e0620/diff: no such file or directory, extraDiskErr: <nil>
	Jul 17 22:03:22 addons-759450 kubelet[1557]: I0717 22:03:22.086705    1557 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nmsp\" (UniqueName: \"kubernetes.io/projected/d85e2e69-dc45-48d1-bc6f-79ecf6ecbaf6-kube-api-access-5nmsp\") pod \"d85e2e69-dc45-48d1-bc6f-79ecf6ecbaf6\" (UID: \"d85e2e69-dc45-48d1-bc6f-79ecf6ecbaf6\") "
	Jul 17 22:03:22 addons-759450 kubelet[1557]: I0717 22:03:22.086754    1557 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d85e2e69-dc45-48d1-bc6f-79ecf6ecbaf6-webhook-cert\") pod \"d85e2e69-dc45-48d1-bc6f-79ecf6ecbaf6\" (UID: \"d85e2e69-dc45-48d1-bc6f-79ecf6ecbaf6\") "
	Jul 17 22:03:22 addons-759450 kubelet[1557]: I0717 22:03:22.088508    1557 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d85e2e69-dc45-48d1-bc6f-79ecf6ecbaf6-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "d85e2e69-dc45-48d1-bc6f-79ecf6ecbaf6" (UID: "d85e2e69-dc45-48d1-bc6f-79ecf6ecbaf6"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 17 22:03:22 addons-759450 kubelet[1557]: I0717 22:03:22.088687    1557 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d85e2e69-dc45-48d1-bc6f-79ecf6ecbaf6-kube-api-access-5nmsp" (OuterVolumeSpecName: "kube-api-access-5nmsp") pod "d85e2e69-dc45-48d1-bc6f-79ecf6ecbaf6" (UID: "d85e2e69-dc45-48d1-bc6f-79ecf6ecbaf6"). InnerVolumeSpecName "kube-api-access-5nmsp". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 17 22:03:22 addons-759450 kubelet[1557]: I0717 22:03:22.187652    1557 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-5nmsp\" (UniqueName: \"kubernetes.io/projected/d85e2e69-dc45-48d1-bc6f-79ecf6ecbaf6-kube-api-access-5nmsp\") on node \"addons-759450\" DevicePath \"\""
	Jul 17 22:03:22 addons-759450 kubelet[1557]: I0717 22:03:22.187687    1557 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d85e2e69-dc45-48d1-bc6f-79ecf6ecbaf6-webhook-cert\") on node \"addons-759450\" DevicePath \"\""
	Jul 17 22:03:22 addons-759450 kubelet[1557]: I0717 22:03:22.207506    1557 scope.go:115] "RemoveContainer" containerID="07780cacce5242a002b0446c9df20eef163b352748ecd797de630b7bd21e1ee4"
	Jul 17 22:03:22 addons-759450 kubelet[1557]: I0717 22:03:22.216661    1557 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-world-app-65bdb79f98-6fs5h" podStartSLOduration=1.92366949 podCreationTimestamp="2023-07-17 22:03:18 +0000 UTC" firstStartedPulling="2023-07-17 22:03:19.39221901 +0000 UTC m=+258.530636115" lastFinishedPulling="2023-07-17 22:03:21.685161038 +0000 UTC m=+260.823578131" observedRunningTime="2023-07-17 22:03:22.216558046 +0000 UTC m=+261.354975158" watchObservedRunningTime="2023-07-17 22:03:22.216611506 +0000 UTC m=+261.355028618"
	Jul 17 22:03:22 addons-759450 kubelet[1557]: I0717 22:03:22.225097    1557 scope.go:115] "RemoveContainer" containerID="07780cacce5242a002b0446c9df20eef163b352748ecd797de630b7bd21e1ee4"
	Jul 17 22:03:22 addons-759450 kubelet[1557]: E0717 22:03:22.225508    1557 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07780cacce5242a002b0446c9df20eef163b352748ecd797de630b7bd21e1ee4\": container with ID starting with 07780cacce5242a002b0446c9df20eef163b352748ecd797de630b7bd21e1ee4 not found: ID does not exist" containerID="07780cacce5242a002b0446c9df20eef163b352748ecd797de630b7bd21e1ee4"
	Jul 17 22:03:22 addons-759450 kubelet[1557]: I0717 22:03:22.225567    1557 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:07780cacce5242a002b0446c9df20eef163b352748ecd797de630b7bd21e1ee4} err="failed to get container status \"07780cacce5242a002b0446c9df20eef163b352748ecd797de630b7bd21e1ee4\": rpc error: code = NotFound desc = could not find container \"07780cacce5242a002b0446c9df20eef163b352748ecd797de630b7bd21e1ee4\": container with ID starting with 07780cacce5242a002b0446c9df20eef163b352748ecd797de630b7bd21e1ee4 not found: ID does not exist"
	Jul 17 22:03:22 addons-759450 kubelet[1557]: I0717 22:03:22.984466    1557 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=d85e2e69-dc45-48d1-bc6f-79ecf6ecbaf6 path="/var/lib/kubelet/pods/d85e2e69-dc45-48d1-bc6f-79ecf6ecbaf6/volumes"
	Jul 17 22:03:23 addons-759450 kubelet[1557]: W0717 22:03:23.125039    1557 container.go:586] Failed to update stats for container "/crio-44ff5b375852b82f19fb875c1dcb030f817f41d081236b50d5ad6c676fa06f90": unable to determine device info for dir: /var/lib/containers/storage/overlay/4b316877256d0ed6623ee381e115cfb87fdf38f3cd900b27392cf923fdf35993/diff: stat failed on /var/lib/containers/storage/overlay/4b316877256d0ed6623ee381e115cfb87fdf38f3cd900b27392cf923fdf35993/diff with error: no such file or directory, continuing to push stats
	Jul 17 22:03:24 addons-759450 kubelet[1557]: W0717 22:03:24.704740    1557 container.go:586] Failed to update stats for container "/docker/1ac74dbcb5be16ff84cde4de8ee804f340923f8f144a250c0e57f41478b19830/crio-8d40fdc00d112aa62ec784ac35dfa9b7f671c748ba138578b155711785d9c9c0": unable to determine device info for dir: /var/lib/containers/storage/overlay/b6e996877612157b411e7257ad1b05d2f6cb05446505e052d71fccda9b509ac3/diff: stat failed on /var/lib/containers/storage/overlay/b6e996877612157b411e7257ad1b05d2f6cb05446505e052d71fccda9b509ac3/diff with error: no such file or directory, continuing to push stats
	Jul 17 22:03:24 addons-759450 kubelet[1557]: W0717 22:03:24.736358    1557 container.go:586] Failed to update stats for container "/docker/1ac74dbcb5be16ff84cde4de8ee804f340923f8f144a250c0e57f41478b19830/crio-be9a5712479324225b83c902aa37d983d96cc28e9db33c329d21fdb475a798bf": unable to determine device info for dir: /var/lib/containers/storage/overlay/9f927e40975a8cdd7a63ed1470703a299e01f793fed374852990073455679a01/diff: stat failed on /var/lib/containers/storage/overlay/9f927e40975a8cdd7a63ed1470703a299e01f793fed374852990073455679a01/diff with error: no such file or directory, continuing to push stats
	Jul 17 22:03:25 addons-759450 kubelet[1557]: W0717 22:03:25.774796    1557 container.go:586] Failed to update stats for container "/crio-8d40fdc00d112aa62ec784ac35dfa9b7f671c748ba138578b155711785d9c9c0": unable to determine device info for dir: /var/lib/containers/storage/overlay/b6e996877612157b411e7257ad1b05d2f6cb05446505e052d71fccda9b509ac3/diff: stat failed on /var/lib/containers/storage/overlay/b6e996877612157b411e7257ad1b05d2f6cb05446505e052d71fccda9b509ac3/diff with error: no such file or directory, continuing to push stats
	Jul 17 22:03:26 addons-759450 kubelet[1557]: W0717 22:03:26.613682    1557 container.go:586] Failed to update stats for container "/docker/1ac74dbcb5be16ff84cde4de8ee804f340923f8f144a250c0e57f41478b19830/crio-3159493b837a8b22682e87685e55fa7ddb18e593cd62c7d9a836b1a0f26ba150": unable to determine device info for dir: /var/lib/containers/storage/overlay/a17a64c622e252f18bf161d98a17b37d4f7eef85b28306b1863a1dae4d9e0620/diff: stat failed on /var/lib/containers/storage/overlay/a17a64c622e252f18bf161d98a17b37d4f7eef85b28306b1863a1dae4d9e0620/diff with error: no such file or directory, continuing to push stats
	Jul 17 22:03:27 addons-759450 kubelet[1557]: W0717 22:03:27.709786    1557 container.go:586] Failed to update stats for container "/crio-234c51406d45012d2b9d59d4ab762216dbb81f04e516f0ce62e372b73776fed6": unable to determine device info for dir: /var/lib/containers/storage/overlay/31fb9538a3bc4dfcdf48a322d1adfcd42e4eed31a189eeca636c668c626286a0/diff: stat failed on /var/lib/containers/storage/overlay/31fb9538a3bc4dfcdf48a322d1adfcd42e4eed31a189eeca636c668c626286a0/diff with error: no such file or directory, continuing to push stats
	Jul 17 22:03:28 addons-759450 kubelet[1557]: W0717 22:03:28.256821    1557 container.go:586] Failed to update stats for container "/docker/1ac74dbcb5be16ff84cde4de8ee804f340923f8f144a250c0e57f41478b19830/crio-234c51406d45012d2b9d59d4ab762216dbb81f04e516f0ce62e372b73776fed6": unable to determine device info for dir: /var/lib/containers/storage/overlay/31fb9538a3bc4dfcdf48a322d1adfcd42e4eed31a189eeca636c668c626286a0/diff: stat failed on /var/lib/containers/storage/overlay/31fb9538a3bc4dfcdf48a322d1adfcd42e4eed31a189eeca636c668c626286a0/diff with error: no such file or directory, continuing to push stats
	
	* 
	* ==> storage-provisioner [054d2b7b6c670d773a070830bba05ea0172fdf89e022ee281b3360644fa9c088] <==
	* I0717 21:59:21.578865       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 21:59:21.660212       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 21:59:21.660263       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 21:59:21.667577       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 21:59:21.667650       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"12128465-397c-4961-b5b3-7e76dd7c839b", APIVersion:"v1", ResourceVersion:"700", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-759450_7f28c545-3f72-47b4-9c66-37bad129e445 became leader
	I0717 21:59:21.667732       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-759450_7f28c545-3f72-47b4-9c66-37bad129e445!
	I0717 21:59:21.767911       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-759450_7f28c545-3f72-47b4-9c66-37bad129e445!
	

-- /stdout --
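The storage-provisioner excerpt above is the one clearly healthy component in this post-mortem: it initializes, wins the kube-system/k8s.io-minikube-hostpath leader-election lease, and starts its controller, so storage is not implicated in the ingress failure. If the profile is still up, the current lease holder can be read back from the annotation that client-go's leader election writes on the lock object (an Endpoints object here, per the event line above); a sketch for manual inspection, not part of the test, and the annotation key is assumed from client-go's resourcelock conventions:

	kubectl --context addons-759450 -n kube-system get endpoints k8s.io-minikube-hostpath \
	  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'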
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-759450 -n addons-759450
helpers_test.go:261: (dbg) Run:  kubectl --context addons-759450 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (158.19s)
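The curl step is the only failing assertion in this test; the pod readiness and addon-disable steps all pass. The check can be replayed by hand against the same profile while the cluster is still up; a sketch (the -m flag is an addition so a manual run fails fast, it is not part of the test):

	# nginx is Running, so probe the ingress path the test uses
	out/minikube-linux-amd64 -p addons-759450 ssh \
	  "curl -s -m 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"

The ssh process exiting with status 28 matches curl's "operation timed out" exit code, which suggests the request reached the node but the ingress controller never answered within the allotted time.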

TestFunctional/parallel/License (0.53s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
functional_test.go:2284: (dbg) Non-zero exit: out/minikube-linux-amd64 license: exit status 40 (531.933317ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to INET_LICENSES: Failed to download licenses: download request did not return a 200, received: 404
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_license_42713f820c0ac68901ecf7b12bfdf24c2cafe65d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2285: command "\n\n" failed: exit status 40
--- FAIL: TestFunctional/parallel/License (0.53s)
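INET_LICENSES is a host-side download failure (the license bundle URL answered 404), so no cluster state is involved. The quickest check is to re-run the command outside the harness with verbose logging and keep the log file the error box names; a sketch, assuming the same binary path as the run above:

	out/minikube-linux-amd64 license --alsologtostderr -v=1
	# collect the full log the error message asks for
	out/minikube-linux-amd64 logs --file=logs.txt

A 404 that reproduces on a second run points at the upstream location having moved rather than a transient network error.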

TestIngressAddonLegacy/serial/ValidateIngressAddons (181.6s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-988346 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-988346 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (13.484478725s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-988346 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-988346 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [ac3f7865-131f-460e-8077-0ba5037cd1bb] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [ac3f7865-131f-460e-8077-0ba5037cd1bb] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 11.00564057s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-988346 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0717 22:10:36.955583  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/client.crt: no such file or directory
E0717 22:11:04.640743  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/client.crt: no such file or directory
E0717 22:11:42.740011  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/functional-994983/client.crt: no such file or directory
E0717 22:11:42.745290  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/functional-994983/client.crt: no such file or directory
E0717 22:11:42.755583  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/functional-994983/client.crt: no such file or directory
E0717 22:11:42.775878  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/functional-994983/client.crt: no such file or directory
E0717 22:11:42.816201  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/functional-994983/client.crt: no such file or directory
E0717 22:11:42.896553  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/functional-994983/client.crt: no such file or directory
E0717 22:11:43.056998  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/functional-994983/client.crt: no such file or directory
E0717 22:11:43.377521  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/functional-994983/client.crt: no such file or directory
E0717 22:11:44.018428  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/functional-994983/client.crt: no such file or directory
E0717 22:11:45.299185  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/functional-994983/client.crt: no such file or directory
E0717 22:11:47.859554  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/functional-994983/client.crt: no such file or directory
E0717 22:11:52.980236  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/functional-994983/client.crt: no such file or directory
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-988346 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.917885726s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-988346 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-988346 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
E0717 22:12:03.220419  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/functional-994983/client.crt: no such file or directory
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.012013824s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
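Both failing assertions in this test reduce to the same symptom: nothing at 192.168.49.2 answers, neither HTTP through the node (ssh/curl exiting 28) nor DNS (nslookup timing out). A quick way to separate a dead UDP/53 path from a record-resolution problem is to compare the test's query with a control query against the host's default resolver; a sketch using only tools the test already uses:

	# query the ingress-dns responder directly, as the test does
	nslookup hello-john.test 192.168.49.2
	# control query via the host's normal resolver
	nslookup kubernetes.io

If the first query still reports "no servers could be reached" while the control succeeds, the path into the node is down and the ingress-dns records themselves are never consulted.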
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-988346 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-988346 addons disable ingress-dns --alsologtostderr -v=1: (2.123098673s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-988346 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-988346 addons disable ingress --alsologtostderr -v=1: (7.390963509s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-988346
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-988346:

-- stdout --
	[
	    {
	        "Id": "729d2601f7f6e059fdfa392033cee4e1789686d5427c7c315ae507ef6dee9dd3",
	        "Created": "2023-07-17T22:08:02.968622351Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 265026,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-17T22:08:03.249845974Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6cc01e6091959400f260dc442708e7c71630b58dab1f7c344cb00926bd84950",
	        "ResolvConfPath": "/var/lib/docker/containers/729d2601f7f6e059fdfa392033cee4e1789686d5427c7c315ae507ef6dee9dd3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/729d2601f7f6e059fdfa392033cee4e1789686d5427c7c315ae507ef6dee9dd3/hostname",
	        "HostsPath": "/var/lib/docker/containers/729d2601f7f6e059fdfa392033cee4e1789686d5427c7c315ae507ef6dee9dd3/hosts",
	        "LogPath": "/var/lib/docker/containers/729d2601f7f6e059fdfa392033cee4e1789686d5427c7c315ae507ef6dee9dd3/729d2601f7f6e059fdfa392033cee4e1789686d5427c7c315ae507ef6dee9dd3-json.log",
	        "Name": "/ingress-addon-legacy-988346",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-988346:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ingress-addon-legacy-988346",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a5b7844df30a33e75654ce459a4ae9143cd45f6e36eff6d755f81a3f7de3b46b-init/diff:/var/lib/docker/overlay2/08d413eb0908d02df131d41f2ca629e52ff8a5bbd0c0c3f9b2a348a71c834d30/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a5b7844df30a33e75654ce459a4ae9143cd45f6e36eff6d755f81a3f7de3b46b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a5b7844df30a33e75654ce459a4ae9143cd45f6e36eff6d755f81a3f7de3b46b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a5b7844df30a33e75654ce459a4ae9143cd45f6e36eff6d755f81a3f7de3b46b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-988346",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-988346/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-988346",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-988346",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-988346",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c0e6c438767aa38b88a81f017a9b31311a9b8ef0c3a116efa6347299d1578416",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c0e6c438767a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-988346": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "729d2601f7f6",
	                        "ingress-addon-legacy-988346"
	                    ],
	                    "NetworkID": "a5653d04b115d0dfc5621018afdee37568b1ed5abc1694aecec30faebb9cc8f3",
	                    "EndpointID": "8eaf5b42784b4633fd25156e44c7765704c957c50bee639f003848e40b1adb79",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-988346 -n ingress-addon-legacy-988346
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-988346 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-988346 logs -n 25: (1.052578653s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| mount          | -p functional-994983                                                   | functional-994983           | jenkins | v1.31.0 | 17 Jul 23 22:07 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2472920255/001:/mount2 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| mount          | -p functional-994983                                                   | functional-994983           | jenkins | v1.31.0 | 17 Jul 23 22:07 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2472920255/001:/mount1 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| ssh            | functional-994983 ssh findmnt                                          | functional-994983           | jenkins | v1.31.0 | 17 Jul 23 22:07 UTC |                     |
	|                | -T /mount1                                                             |                             |         |         |                     |                     |
	| ssh            | functional-994983 ssh findmnt                                          | functional-994983           | jenkins | v1.31.0 | 17 Jul 23 22:07 UTC | 17 Jul 23 22:07 UTC |
	|                | -T /mount1                                                             |                             |         |         |                     |                     |
	| ssh            | functional-994983 ssh findmnt                                          | functional-994983           | jenkins | v1.31.0 | 17 Jul 23 22:07 UTC | 17 Jul 23 22:07 UTC |
	|                | -T /mount2                                                             |                             |         |         |                     |                     |
	| ssh            | functional-994983 ssh findmnt                                          | functional-994983           | jenkins | v1.31.0 | 17 Jul 23 22:07 UTC | 17 Jul 23 22:07 UTC |
	|                | -T /mount3                                                             |                             |         |         |                     |                     |
	| mount          | -p functional-994983                                                   | functional-994983           | jenkins | v1.31.0 | 17 Jul 23 22:07 UTC |                     |
	|                | --kill=true                                                            |                             |         |         |                     |                     |
	| image          | functional-994983                                                      | functional-994983           | jenkins | v1.31.0 | 17 Jul 23 22:07 UTC | 17 Jul 23 22:07 UTC |
	|                | image ls --format short                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-994983                                                      | functional-994983           | jenkins | v1.31.0 | 17 Jul 23 22:07 UTC | 17 Jul 23 22:07 UTC |
	|                | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-994983 ssh pgrep                                            | functional-994983           | jenkins | v1.31.0 | 17 Jul 23 22:07 UTC |                     |
	|                | buildkitd                                                              |                             |         |         |                     |                     |
	| image          | functional-994983 image build -t                                       | functional-994983           | jenkins | v1.31.0 | 17 Jul 23 22:07 UTC | 17 Jul 23 22:07 UTC |
	|                | localhost/my-image:functional-994983                                   |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| image          | functional-994983 image ls                                             | functional-994983           | jenkins | v1.31.0 | 17 Jul 23 22:07 UTC | 17 Jul 23 22:07 UTC |
	| image          | functional-994983                                                      | functional-994983           | jenkins | v1.31.0 | 17 Jul 23 22:07 UTC | 17 Jul 23 22:07 UTC |
	|                | image ls --format json                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-994983                                                      | functional-994983           | jenkins | v1.31.0 | 17 Jul 23 22:07 UTC | 17 Jul 23 22:07 UTC |
	|                | image ls --format table                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| update-context | functional-994983                                                      | functional-994983           | jenkins | v1.31.0 | 17 Jul 23 22:07 UTC | 17 Jul 23 22:07 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-994983                                                      | functional-994983           | jenkins | v1.31.0 | 17 Jul 23 22:07 UTC | 17 Jul 23 22:07 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-994983                                                      | functional-994983           | jenkins | v1.31.0 | 17 Jul 23 22:07 UTC | 17 Jul 23 22:07 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| delete         | -p functional-994983                                                   | functional-994983           | jenkins | v1.31.0 | 17 Jul 23 22:07 UTC | 17 Jul 23 22:07 UTC |
	| start          | -p ingress-addon-legacy-988346                                         | ingress-addon-legacy-988346 | jenkins | v1.31.0 | 17 Jul 23 22:07 UTC | 17 Jul 23 22:09 UTC |
	|                | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                                                   |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-988346                                            | ingress-addon-legacy-988346 | jenkins | v1.31.0 | 17 Jul 23 22:09 UTC | 17 Jul 23 22:09 UTC |
	|                | addons enable ingress                                                  |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-988346                                            | ingress-addon-legacy-988346 | jenkins | v1.31.0 | 17 Jul 23 22:09 UTC | 17 Jul 23 22:09 UTC |
	|                | addons enable ingress-dns                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-988346                                            | ingress-addon-legacy-988346 | jenkins | v1.31.0 | 17 Jul 23 22:09 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                          |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                           |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-988346 ip                                         | ingress-addon-legacy-988346 | jenkins | v1.31.0 | 17 Jul 23 22:11 UTC | 17 Jul 23 22:11 UTC |
	| addons         | ingress-addon-legacy-988346                                            | ingress-addon-legacy-988346 | jenkins | v1.31.0 | 17 Jul 23 22:12 UTC | 17 Jul 23 22:12 UTC |
	|                | addons disable ingress-dns                                             |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-988346                                            | ingress-addon-legacy-988346 | jenkins | v1.31.0 | 17 Jul 23 22:12 UTC | 17 Jul 23 22:12 UTC |
	|                | addons disable ingress                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 22:07:38
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 22:07:38.843652  264375 out.go:296] Setting OutFile to fd 1 ...
	I0717 22:07:38.843806  264375 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:07:38.843816  264375 out.go:309] Setting ErrFile to fd 2...
	I0717 22:07:38.843821  264375 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:07:38.844060  264375 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-218877/.minikube/bin
	I0717 22:07:38.844701  264375 out.go:303] Setting JSON to false
	I0717 22:07:38.845676  264375 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6603,"bootTime":1689625056,"procs":246,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 22:07:38.845746  264375 start.go:138] virtualization: kvm guest
	I0717 22:07:38.848369  264375 out.go:177] * [ingress-addon-legacy-988346] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	I0717 22:07:38.850239  264375 notify.go:220] Checking for updates...
	I0717 22:07:38.850242  264375 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 22:07:38.851797  264375 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 22:07:38.853216  264375 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16899-218877/kubeconfig
	I0717 22:07:38.854849  264375 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-218877/.minikube
	I0717 22:07:38.856416  264375 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 22:07:38.857918  264375 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 22:07:38.859628  264375 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 22:07:38.885628  264375 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 22:07:38.885716  264375 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 22:07:38.947194  264375 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:38 SystemTime:2023-07-17 22:07:38.938457543 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 22:07:38.947308  264375 docker.go:294] overlay module found
	I0717 22:07:38.949255  264375 out.go:177] * Using the docker driver based on user configuration
	I0717 22:07:38.950747  264375 start.go:298] selected driver: docker
	I0717 22:07:38.950765  264375 start.go:880] validating driver "docker" against <nil>
	I0717 22:07:38.950778  264375 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 22:07:38.951608  264375 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 22:07:39.011711  264375 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:38 SystemTime:2023-07-17 22:07:39.002820808 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 22:07:39.011923  264375 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0717 22:07:39.012102  264375 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 22:07:39.013929  264375 out.go:177] * Using Docker driver with root privileges
	I0717 22:07:39.015362  264375 cni.go:84] Creating CNI manager for ""
	I0717 22:07:39.015381  264375 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 22:07:39.015392  264375 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 22:07:39.015402  264375 start_flags.go:319] config:
	{Name:ingress-addon-legacy-988346 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-988346 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:07:39.017056  264375 out.go:177] * Starting control plane node ingress-addon-legacy-988346 in cluster ingress-addon-legacy-988346
	I0717 22:07:39.018496  264375 cache.go:122] Beginning downloading kic base image for docker with crio
	I0717 22:07:39.019884  264375 out.go:177] * Pulling base image ...
	I0717 22:07:39.021188  264375 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0717 22:07:39.021299  264375 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 22:07:39.038181  264375 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0717 22:07:39.038210  264375 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0717 22:07:39.248824  264375 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0717 22:07:39.248863  264375 cache.go:57] Caching tarball of preloaded images
	I0717 22:07:39.249051  264375 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0717 22:07:39.251235  264375 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0717 22:07:39.252772  264375 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0717 22:07:39.355766  264375 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/16899-218877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0717 22:07:54.700709  264375 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0717 22:07:54.700803  264375 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16899-218877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0717 22:07:55.652915  264375 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I0717 22:07:55.653324  264375 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/config.json ...
	I0717 22:07:55.653379  264375 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/config.json: {Name:mk6bd041d8881157880ce68e9ff6d701cf407f11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:07:55.653568  264375 cache.go:195] Successfully downloaded all kic artifacts
	I0717 22:07:55.653593  264375 start.go:365] acquiring machines lock for ingress-addon-legacy-988346: {Name:mk5da3fa154e5be761f2cc3dbe61712e96d4d7fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:07:55.653634  264375 start.go:369] acquired machines lock for "ingress-addon-legacy-988346" in 30.159µs
	I0717 22:07:55.653652  264375 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-988346 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-988346 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 22:07:55.653720  264375 start.go:125] createHost starting for "" (driver="docker")
	I0717 22:07:55.655940  264375 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0717 22:07:55.656203  264375 start.go:159] libmachine.API.Create for "ingress-addon-legacy-988346" (driver="docker")
	I0717 22:07:55.656231  264375 client.go:168] LocalClient.Create starting
	I0717 22:07:55.656294  264375 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca.pem
	I0717 22:07:55.656325  264375 main.go:141] libmachine: Decoding PEM data...
	I0717 22:07:55.656340  264375 main.go:141] libmachine: Parsing certificate...
	I0717 22:07:55.656396  264375 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16899-218877/.minikube/certs/cert.pem
	I0717 22:07:55.656413  264375 main.go:141] libmachine: Decoding PEM data...
	I0717 22:07:55.656425  264375 main.go:141] libmachine: Parsing certificate...
	I0717 22:07:55.656744  264375 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-988346 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0717 22:07:55.672940  264375 cli_runner.go:211] docker network inspect ingress-addon-legacy-988346 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0717 22:07:55.673024  264375 network_create.go:281] running [docker network inspect ingress-addon-legacy-988346] to gather additional debugging logs...
	I0717 22:07:55.673048  264375 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-988346
	W0717 22:07:55.689670  264375 cli_runner.go:211] docker network inspect ingress-addon-legacy-988346 returned with exit code 1
	I0717 22:07:55.689703  264375 network_create.go:284] error running [docker network inspect ingress-addon-legacy-988346]: docker network inspect ingress-addon-legacy-988346: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-988346 not found
	I0717 22:07:55.689718  264375 network_create.go:286] output of [docker network inspect ingress-addon-legacy-988346]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-988346 not found
	
	** /stderr **
	I0717 22:07:55.689788  264375 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 22:07:55.706008  264375 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001fb610}
	I0717 22:07:55.706061  264375 network_create.go:123] attempt to create docker network ingress-addon-legacy-988346 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0717 22:07:55.706119  264375 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-988346 ingress-addon-legacy-988346
	I0717 22:07:55.762105  264375 network_create.go:107] docker network ingress-addon-legacy-988346 192.168.49.0/24 created
	I0717 22:07:55.762155  264375 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-988346" container
	I0717 22:07:55.762254  264375 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0717 22:07:55.777791  264375 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-988346 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-988346 --label created_by.minikube.sigs.k8s.io=true
	I0717 22:07:55.795882  264375 oci.go:103] Successfully created a docker volume ingress-addon-legacy-988346
	I0717 22:07:55.795964  264375 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-988346-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-988346 --entrypoint /usr/bin/test -v ingress-addon-legacy-988346:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0717 22:07:57.526377  264375 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-988346-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-988346 --entrypoint /usr/bin/test -v ingress-addon-legacy-988346:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib: (1.730338447s)
	I0717 22:07:57.526411  264375 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-988346
	I0717 22:07:57.526442  264375 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0717 22:07:57.526464  264375 kic.go:190] Starting extracting preloaded images to volume ...
	I0717 22:07:57.526544  264375 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16899-218877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-988346:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0717 22:08:02.899784  264375 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16899-218877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-988346:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (5.373182722s)
	I0717 22:08:02.899817  264375 kic.go:199] duration metric: took 5.373348 seconds to extract preloaded images to volume
	W0717 22:08:02.899968  264375 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0717 22:08:02.900086  264375 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0717 22:08:02.953546  264375 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-988346 --name ingress-addon-legacy-988346 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-988346 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-988346 --network ingress-addon-legacy-988346 --ip 192.168.49.2 --volume ingress-addon-legacy-988346:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0717 22:08:03.257322  264375 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-988346 --format={{.State.Running}}
	I0717 22:08:03.274910  264375 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-988346 --format={{.State.Status}}
	I0717 22:08:03.292930  264375 cli_runner.go:164] Run: docker exec ingress-addon-legacy-988346 stat /var/lib/dpkg/alternatives/iptables
	I0717 22:08:03.347455  264375 oci.go:144] the created container "ingress-addon-legacy-988346" has a running status.
	I0717 22:08:03.347493  264375 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16899-218877/.minikube/machines/ingress-addon-legacy-988346/id_rsa...
	I0717 22:08:03.507633  264375 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/machines/ingress-addon-legacy-988346/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0717 22:08:03.507684  264375 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16899-218877/.minikube/machines/ingress-addon-legacy-988346/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0717 22:08:03.528383  264375 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-988346 --format={{.State.Status}}
	I0717 22:08:03.547525  264375 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0717 22:08:03.547548  264375 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-988346 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0717 22:08:03.607039  264375 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-988346 --format={{.State.Status}}
	I0717 22:08:03.626010  264375 machine.go:88] provisioning docker machine ...
	I0717 22:08:03.626052  264375 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-988346"
	I0717 22:08:03.626120  264375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-988346
	I0717 22:08:03.645000  264375 main.go:141] libmachine: Using SSH client type: native
	I0717 22:08:03.645522  264375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0717 22:08:03.645539  264375 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-988346 && echo "ingress-addon-legacy-988346" | sudo tee /etc/hostname
	I0717 22:08:03.646277  264375 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53634->127.0.0.1:32787: read: connection reset by peer
	I0717 22:08:06.781708  264375 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-988346
	
	I0717 22:08:06.781792  264375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-988346
	I0717 22:08:06.797666  264375 main.go:141] libmachine: Using SSH client type: native
	I0717 22:08:06.798245  264375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0717 22:08:06.798276  264375 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-988346' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-988346/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-988346' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 22:08:06.923436  264375 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 22:08:06.923469  264375 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16899-218877/.minikube CaCertPath:/home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16899-218877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16899-218877/.minikube}
	I0717 22:08:06.923503  264375 ubuntu.go:177] setting up certificates
	I0717 22:08:06.923517  264375 provision.go:83] configureAuth start
	I0717 22:08:06.923580  264375 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-988346
	I0717 22:08:06.939522  264375 provision.go:138] copyHostCerts
	I0717 22:08:06.939562  264375 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16899-218877/.minikube/key.pem
	I0717 22:08:06.939591  264375 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-218877/.minikube/key.pem, removing ...
	I0717 22:08:06.939601  264375 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-218877/.minikube/key.pem
	I0717 22:08:06.939666  264375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16899-218877/.minikube/key.pem (1679 bytes)
	I0717 22:08:06.939735  264375 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16899-218877/.minikube/ca.pem
	I0717 22:08:06.939751  264375 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-218877/.minikube/ca.pem, removing ...
	I0717 22:08:06.939755  264375 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-218877/.minikube/ca.pem
	I0717 22:08:06.939777  264375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16899-218877/.minikube/ca.pem (1078 bytes)
	I0717 22:08:06.939821  264375 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16899-218877/.minikube/cert.pem
	I0717 22:08:06.939836  264375 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-218877/.minikube/cert.pem, removing ...
	I0717 22:08:06.939842  264375 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-218877/.minikube/cert.pem
	I0717 22:08:06.939863  264375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16899-218877/.minikube/cert.pem (1123 bytes)
	I0717 22:08:06.939907  264375 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16899-218877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-988346 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-988346]
	I0717 22:08:07.046280  264375 provision.go:172] copyRemoteCerts
	I0717 22:08:07.046340  264375 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 22:08:07.046375  264375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-988346
	I0717 22:08:07.062068  264375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/ingress-addon-legacy-988346/id_rsa Username:docker}
	I0717 22:08:07.151874  264375 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 22:08:07.151952  264375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 22:08:07.172809  264375 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 22:08:07.172882  264375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 22:08:07.194335  264375 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 22:08:07.194403  264375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0717 22:08:07.216845  264375 provision.go:86] duration metric: configureAuth took 293.307717ms
	I0717 22:08:07.216884  264375 ubuntu.go:193] setting minikube options for container-runtime
	I0717 22:08:07.217053  264375 config.go:182] Loaded profile config "ingress-addon-legacy-988346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0717 22:08:07.217154  264375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-988346
	I0717 22:08:07.233897  264375 main.go:141] libmachine: Using SSH client type: native
	I0717 22:08:07.234339  264375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0717 22:08:07.234358  264375 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 22:08:07.467054  264375 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 22:08:07.467083  264375 machine.go:91] provisioned docker machine in 3.841047344s
	I0717 22:08:07.467093  264375 client.go:171] LocalClient.Create took 11.810857895s
	I0717 22:08:07.467115  264375 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-988346" took 11.810911656s
	I0717 22:08:07.467122  264375 start.go:300] post-start starting for "ingress-addon-legacy-988346" (driver="docker")
	I0717 22:08:07.467134  264375 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 22:08:07.467199  264375 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 22:08:07.467263  264375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-988346
	I0717 22:08:07.483969  264375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/ingress-addon-legacy-988346/id_rsa Username:docker}
	I0717 22:08:07.575880  264375 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 22:08:07.578783  264375 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 22:08:07.578810  264375 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 22:08:07.578818  264375 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 22:08:07.578828  264375 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0717 22:08:07.578839  264375 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-218877/.minikube/addons for local assets ...
	I0717 22:08:07.578895  264375 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-218877/.minikube/files for local assets ...
	I0717 22:08:07.578980  264375 filesync.go:149] local asset: /home/jenkins/minikube-integration/16899-218877/.minikube/files/etc/ssl/certs/2256422.pem -> 2256422.pem in /etc/ssl/certs
	I0717 22:08:07.578991  264375 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/files/etc/ssl/certs/2256422.pem -> /etc/ssl/certs/2256422.pem
	I0717 22:08:07.579096  264375 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 22:08:07.586599  264375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/files/etc/ssl/certs/2256422.pem --> /etc/ssl/certs/2256422.pem (1708 bytes)
	I0717 22:08:07.607267  264375 start.go:303] post-start completed in 140.127242ms
	I0717 22:08:07.607658  264375 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-988346
	I0717 22:08:07.628312  264375 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/config.json ...
	I0717 22:08:07.628541  264375 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 22:08:07.628585  264375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-988346
	I0717 22:08:07.644036  264375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/ingress-addon-legacy-988346/id_rsa Username:docker}
	I0717 22:08:07.732092  264375 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 22:08:07.735985  264375 start.go:128] duration metric: createHost completed in 12.082251599s
	I0717 22:08:07.736012  264375 start.go:83] releasing machines lock for "ingress-addon-legacy-988346", held for 12.082368271s
	I0717 22:08:07.736089  264375 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-988346
	I0717 22:08:07.751583  264375 ssh_runner.go:195] Run: cat /version.json
	I0717 22:08:07.751640  264375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-988346
	I0717 22:08:07.751677  264375 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 22:08:07.751727  264375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-988346
	I0717 22:08:07.771012  264375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/ingress-addon-legacy-988346/id_rsa Username:docker}
	I0717 22:08:07.771482  264375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/ingress-addon-legacy-988346/id_rsa Username:docker}
	I0717 22:08:07.952805  264375 ssh_runner.go:195] Run: systemctl --version
	I0717 22:08:07.956799  264375 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 22:08:08.093616  264375 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 22:08:08.097751  264375 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 22:08:08.115371  264375 cni.go:227] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0717 22:08:08.115478  264375 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 22:08:08.140831  264375 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0717 22:08:08.140851  264375 start.go:466] detecting cgroup driver to use...
	I0717 22:08:08.140882  264375 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 22:08:08.140930  264375 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 22:08:08.154528  264375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 22:08:08.164313  264375 docker.go:196] disabling cri-docker service (if available) ...
	I0717 22:08:08.164360  264375 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 22:08:08.176099  264375 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 22:08:08.188346  264375 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 22:08:08.260006  264375 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 22:08:08.332584  264375 docker.go:212] disabling docker service ...
	I0717 22:08:08.332649  264375 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 22:08:08.349275  264375 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 22:08:08.359301  264375 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 22:08:08.442866  264375 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 22:08:08.519706  264375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 22:08:08.530063  264375 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 22:08:08.544064  264375 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0717 22:08:08.544118  264375 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:08:08.552412  264375 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 22:08:08.552470  264375 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:08:08.560805  264375 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:08:08.569082  264375 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:08:08.577454  264375 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 22:08:08.585188  264375 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 22:08:08.592273  264375 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 22:08:08.599348  264375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 22:08:08.667898  264375 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 22:08:08.784729  264375 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 22:08:08.784796  264375 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 22:08:08.788215  264375 start.go:534] Will wait 60s for crictl version
	I0717 22:08:08.788264  264375 ssh_runner.go:195] Run: which crictl
	I0717 22:08:08.791120  264375 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 22:08:08.823715  264375 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0717 22:08:08.823783  264375 ssh_runner.go:195] Run: crio --version
	I0717 22:08:08.855867  264375 ssh_runner.go:195] Run: crio --version
	I0717 22:08:08.890722  264375 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I0717 22:08:08.892307  264375 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-988346 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 22:08:08.908399  264375 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0717 22:08:08.911872  264375 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 22:08:08.922074  264375 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0717 22:08:08.922128  264375 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 22:08:08.965317  264375 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0717 22:08:08.965391  264375 ssh_runner.go:195] Run: which lz4
	I0717 22:08:08.968745  264375 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0717 22:08:08.968828  264375 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 22:08:08.971988  264375 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 22:08:08.972012  264375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I0717 22:08:09.911479  264375 crio.go:444] Took 0.942679 seconds to copy over tarball
	I0717 22:08:09.911538  264375 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 22:08:12.110278  264375 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.19869574s)
	I0717 22:08:12.110307  264375 crio.go:451] Took 2.198803 seconds to extract the tarball
	I0717 22:08:12.110320  264375 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 22:08:12.179388  264375 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 22:08:12.211467  264375 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0717 22:08:12.211491  264375 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 22:08:12.211567  264375 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:08:12.211580  264375 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0717 22:08:12.211609  264375 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0717 22:08:12.211631  264375 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0717 22:08:12.211667  264375 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0717 22:08:12.211589  264375 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0717 22:08:12.211749  264375 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0717 22:08:12.211740  264375 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0717 22:08:12.212865  264375 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0717 22:08:12.212877  264375 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0717 22:08:12.212872  264375 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:08:12.212898  264375 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0717 22:08:12.212879  264375 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0717 22:08:12.212884  264375 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0717 22:08:12.212921  264375 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0717 22:08:12.212951  264375 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0717 22:08:12.385839  264375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0717 22:08:12.420148  264375 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0717 22:08:12.420188  264375 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0717 22:08:12.420221  264375 ssh_runner.go:195] Run: which crictl
	I0717 22:08:12.423498  264375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0717 22:08:12.453868  264375 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0717 22:08:12.535391  264375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0717 22:08:12.570070  264375 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0717 22:08:12.570114  264375 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0717 22:08:12.570168  264375 ssh_runner.go:195] Run: which crictl
	I0717 22:08:12.573406  264375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0717 22:08:12.580783  264375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0717 22:08:12.583006  264375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0717 22:08:12.584788  264375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0717 22:08:12.585351  264375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0717 22:08:12.597466  264375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0717 22:08:12.663704  264375 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0717 22:08:12.679847  264375 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0717 22:08:12.679888  264375 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0717 22:08:12.679926  264375 ssh_runner.go:195] Run: which crictl
	I0717 22:08:12.730210  264375 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0717 22:08:12.730265  264375 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0717 22:08:12.730300  264375 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0717 22:08:12.730316  264375 ssh_runner.go:195] Run: which crictl
	I0717 22:08:12.730330  264375 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0717 22:08:12.730339  264375 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0717 22:08:12.730351  264375 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0717 22:08:12.730378  264375 ssh_runner.go:195] Run: which crictl
	I0717 22:08:12.730379  264375 ssh_runner.go:195] Run: which crictl
	I0717 22:08:12.730448  264375 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0717 22:08:12.730487  264375 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0717 22:08:12.730535  264375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0717 22:08:12.730536  264375 ssh_runner.go:195] Run: which crictl
	I0717 22:08:12.734539  264375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0717 22:08:12.734633  264375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0717 22:08:12.734698  264375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0717 22:08:12.736451  264375 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0717 22:08:12.787256  264375 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0717 22:08:12.787375  264375 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0717 22:08:12.794830  264375 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0717 22:08:12.794872  264375 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0717 22:08:12.794963  264375 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0717 22:08:13.058829  264375 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:08:13.193224  264375 cache_images.go:92] LoadImages completed in 981.71675ms
	W0717 22:08:13.193324  264375 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0717 22:08:13.193386  264375 ssh_runner.go:195] Run: crio config
	I0717 22:08:13.232502  264375 cni.go:84] Creating CNI manager for ""
	I0717 22:08:13.232525  264375 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 22:08:13.232537  264375 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 22:08:13.232560  264375 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-988346 NodeName:ingress-addon-legacy-988346 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 22:08:13.232720  264375 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-988346"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 22:08:13.232823  264375 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-988346 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-988346 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 22:08:13.232890  264375 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0717 22:08:13.240864  264375 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 22:08:13.240931  264375 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 22:08:13.248215  264375 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I0717 22:08:13.262826  264375 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0717 22:08:13.277520  264375 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0717 22:08:13.292301  264375 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0717 22:08:13.295181  264375 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 22:08:13.304549  264375 certs.go:56] Setting up /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346 for IP: 192.168.49.2
	I0717 22:08:13.304582  264375 certs.go:190] acquiring lock for shared ca certs: {Name:mk5feafb57b96958f78245f8503644226fe57af0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:08:13.304725  264375 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16899-218877/.minikube/ca.key
	I0717 22:08:13.304784  264375 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16899-218877/.minikube/proxy-client-ca.key
	I0717 22:08:13.304842  264375 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/client.key
	I0717 22:08:13.304860  264375 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/client.crt with IP's: []
	I0717 22:08:13.448270  264375 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/client.crt ...
	I0717 22:08:13.448301  264375 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/client.crt: {Name:mke463d0774315e6852fce7ba6c1b9762f755124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:08:13.448472  264375 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/client.key ...
	I0717 22:08:13.448483  264375 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/client.key: {Name:mk52a97e2504b600cdaf53e199a92b13f683b923 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:08:13.448561  264375 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/apiserver.key.dd3b5fb2
	I0717 22:08:13.448576  264375 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0717 22:08:13.514549  264375 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/apiserver.crt.dd3b5fb2 ...
	I0717 22:08:13.514578  264375 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/apiserver.crt.dd3b5fb2: {Name:mkced5906d137c11c9131ace5d66f0960378350d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:08:13.514725  264375 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/apiserver.key.dd3b5fb2 ...
	I0717 22:08:13.514735  264375 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/apiserver.key.dd3b5fb2: {Name:mk1fd49c04a811b78c67cd77592c6b30bc633153 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:08:13.514800  264375 certs.go:337] copying /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/apiserver.crt
	I0717 22:08:13.514884  264375 certs.go:341] copying /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/apiserver.key
	I0717 22:08:13.514940  264375 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/proxy-client.key
	I0717 22:08:13.514954  264375 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/proxy-client.crt with IP's: []
	I0717 22:08:13.680438  264375 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/proxy-client.crt ...
	I0717 22:08:13.680469  264375 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/proxy-client.crt: {Name:mkddb86fa3fcf6754b79e3bf3cff08ff2d4449a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:08:13.680619  264375 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/proxy-client.key ...
	I0717 22:08:13.680634  264375 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/proxy-client.key: {Name:mkf99753a8e73143e480dbfbf00fc19dfdb8fc40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:08:13.680700  264375 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 22:08:13.680716  264375 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 22:08:13.680726  264375 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 22:08:13.680737  264375 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 22:08:13.680755  264375 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 22:08:13.680771  264375 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 22:08:13.680780  264375 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 22:08:13.680794  264375 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 22:08:13.680841  264375 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/home/jenkins/minikube-integration/16899-218877/.minikube/certs/225642.pem (1338 bytes)
	W0717 22:08:13.680876  264375 certs.go:433] ignoring /home/jenkins/minikube-integration/16899-218877/.minikube/certs/home/jenkins/minikube-integration/16899-218877/.minikube/certs/225642_empty.pem, impossibly tiny 0 bytes
	I0717 22:08:13.680888  264375 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 22:08:13.680912  264375 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca.pem (1078 bytes)
	I0717 22:08:13.680933  264375 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/home/jenkins/minikube-integration/16899-218877/.minikube/certs/cert.pem (1123 bytes)
	I0717 22:08:13.680957  264375 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/home/jenkins/minikube-integration/16899-218877/.minikube/certs/key.pem (1679 bytes)
	I0717 22:08:13.680994  264375 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-218877/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16899-218877/.minikube/files/etc/ssl/certs/2256422.pem (1708 bytes)
	I0717 22:08:13.681021  264375 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:08:13.681034  264375 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/225642.pem -> /usr/share/ca-certificates/225642.pem
	I0717 22:08:13.681045  264375 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/files/etc/ssl/certs/2256422.pem -> /usr/share/ca-certificates/2256422.pem
	I0717 22:08:13.681622  264375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 22:08:13.702787  264375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 22:08:13.722958  264375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 22:08:13.743234  264375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 22:08:13.763312  264375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 22:08:13.783219  264375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 22:08:13.802639  264375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 22:08:13.822476  264375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 22:08:13.842760  264375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 22:08:13.862788  264375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/certs/225642.pem --> /usr/share/ca-certificates/225642.pem (1338 bytes)
	I0717 22:08:13.882185  264375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/files/etc/ssl/certs/2256422.pem --> /usr/share/ca-certificates/2256422.pem (1708 bytes)
	I0717 22:08:13.901746  264375 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 22:08:13.916427  264375 ssh_runner.go:195] Run: openssl version
	I0717 22:08:13.921094  264375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/225642.pem && ln -fs /usr/share/ca-certificates/225642.pem /etc/ssl/certs/225642.pem"
	I0717 22:08:13.928753  264375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/225642.pem
	I0717 22:08:13.931925  264375 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 22:04 /usr/share/ca-certificates/225642.pem
	I0717 22:08:13.931968  264375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/225642.pem
	I0717 22:08:13.937696  264375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/225642.pem /etc/ssl/certs/51391683.0"
	I0717 22:08:13.945498  264375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2256422.pem && ln -fs /usr/share/ca-certificates/2256422.pem /etc/ssl/certs/2256422.pem"
	I0717 22:08:13.953562  264375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2256422.pem
	I0717 22:08:13.956533  264375 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 22:04 /usr/share/ca-certificates/2256422.pem
	I0717 22:08:13.956591  264375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2256422.pem
	I0717 22:08:13.962695  264375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2256422.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 22:08:13.970457  264375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 22:08:13.978155  264375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:08:13.981143  264375 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 21:58 /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:08:13.981187  264375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:08:13.987097  264375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
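The openssl x509 -hash / ln -fs pairs above are OpenSSL's subject-hash trust-store convention: each CA under /etc/ssl/certs gets a symlink named <subject-hash>.0 (51391683.0, 3ec20f2e.0 and b5213941.0 here), which is how TLS clients locate it. A minimal sketch of one round, reusing the minikubeCA.pem path from the log:

    # compute the subject hash, then create the <hash>.0 symlink OpenSSL looks up
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"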
	I0717 22:08:13.994729  264375 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 22:08:13.997432  264375 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 22:08:13.997486  264375 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-988346 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-988346 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:08:13.997585  264375 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 22:08:13.997638  264375 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 22:08:14.029595  264375 cri.go:89] found id: ""
	I0717 22:08:14.029668  264375 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 22:08:14.037167  264375 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 22:08:14.044416  264375 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0717 22:08:14.044479  264375 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 22:08:14.051571  264375 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 22:08:14.051621  264375 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0717 22:08:14.093616  264375 kubeadm.go:322] W0717 22:08:14.093021    1377 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0717 22:08:14.131191  264375 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1037-gcp\n", err: exit status 1
	I0717 22:08:14.196827  264375 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 22:08:16.995720  264375 kubeadm.go:322] W0717 22:08:16.995315    1377 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0717 22:08:16.996727  264375 kubeadm.go:322] W0717 22:08:16.996459    1377 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0717 22:08:25.454819  264375 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0717 22:08:25.454894  264375 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 22:08:25.454974  264375 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0717 22:08:25.455028  264375 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1037-gcp
	I0717 22:08:25.455059  264375 kubeadm.go:322] OS: Linux
	I0717 22:08:25.455099  264375 kubeadm.go:322] CGROUPS_CPU: enabled
	I0717 22:08:25.455181  264375 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0717 22:08:25.455283  264375 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0717 22:08:25.455353  264375 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0717 22:08:25.455396  264375 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0717 22:08:25.455463  264375 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0717 22:08:25.455525  264375 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 22:08:25.455603  264375 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 22:08:25.455683  264375 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 22:08:25.455790  264375 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 22:08:25.455908  264375 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 22:08:25.455967  264375 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0717 22:08:25.456066  264375 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 22:08:25.457740  264375 out.go:204]   - Generating certificates and keys ...
	I0717 22:08:25.457833  264375 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 22:08:25.457930  264375 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 22:08:25.458017  264375 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 22:08:25.458098  264375 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0717 22:08:25.458186  264375 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0717 22:08:25.458264  264375 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0717 22:08:25.458340  264375 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0717 22:08:25.458488  264375 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-988346 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0717 22:08:25.458572  264375 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0717 22:08:25.458697  264375 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-988346 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0717 22:08:25.458752  264375 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 22:08:25.458807  264375 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 22:08:25.458853  264375 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0717 22:08:25.458908  264375 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 22:08:25.458974  264375 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 22:08:25.459060  264375 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 22:08:25.459128  264375 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 22:08:25.459185  264375 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 22:08:25.459248  264375 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 22:08:25.460474  264375 out.go:204]   - Booting up control plane ...
	I0717 22:08:25.460572  264375 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 22:08:25.460644  264375 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 22:08:25.460703  264375 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 22:08:25.460794  264375 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 22:08:25.460966  264375 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 22:08:25.461069  264375 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.002457 seconds
	I0717 22:08:25.461224  264375 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 22:08:25.461354  264375 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 22:08:25.461411  264375 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 22:08:25.461522  264375 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-988346 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0717 22:08:25.461613  264375 kubeadm.go:322] [bootstrap-token] Using token: nkhsj3.wmcul5k6aesh5d03
	I0717 22:08:25.463099  264375 out.go:204]   - Configuring RBAC rules ...
	I0717 22:08:25.463207  264375 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 22:08:25.463296  264375 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 22:08:25.463456  264375 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 22:08:25.463579  264375 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 22:08:25.463746  264375 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 22:08:25.463856  264375 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 22:08:25.463961  264375 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 22:08:25.464022  264375 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0717 22:08:25.464060  264375 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0717 22:08:25.464066  264375 kubeadm.go:322] 
	I0717 22:08:25.464116  264375 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0717 22:08:25.464122  264375 kubeadm.go:322] 
	I0717 22:08:25.464186  264375 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0717 22:08:25.464192  264375 kubeadm.go:322] 
	I0717 22:08:25.464215  264375 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0717 22:08:25.464268  264375 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 22:08:25.464314  264375 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 22:08:25.464319  264375 kubeadm.go:322] 
	I0717 22:08:25.464365  264375 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0717 22:08:25.464427  264375 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 22:08:25.464483  264375 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 22:08:25.464488  264375 kubeadm.go:322] 
	I0717 22:08:25.464558  264375 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 22:08:25.464621  264375 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0717 22:08:25.464626  264375 kubeadm.go:322] 
	I0717 22:08:25.464693  264375 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token nkhsj3.wmcul5k6aesh5d03 \
	I0717 22:08:25.464779  264375 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:bfc53725e6665ea0346f55c73390f7faa9cc8aa313e76f38236964b5079a2a27 \
	I0717 22:08:25.464803  264375 kubeadm.go:322]     --control-plane 
	I0717 22:08:25.464809  264375 kubeadm.go:322] 
	I0717 22:08:25.464886  264375 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0717 22:08:25.464894  264375 kubeadm.go:322] 
	I0717 22:08:25.464968  264375 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token nkhsj3.wmcul5k6aesh5d03 \
	I0717 22:08:25.465061  264375 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:bfc53725e6665ea0346f55c73390f7faa9cc8aa313e76f38236964b5079a2a27 
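The --discovery-token-ca-cert-hash printed above is a SHA-256 digest of the cluster CA's public key; a joining node recomputes it from the CA certificate to verify it is talking to the intended control plane. The standard kubeadm recipe, pointed at the CA this log copied to /var/lib/minikube/certs/ca.crt:

    # recompute the discovery hash from the cluster CA
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'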
	I0717 22:08:25.465078  264375 cni.go:84] Creating CNI manager for ""
	I0717 22:08:25.465090  264375 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 22:08:25.466577  264375 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0717 22:08:25.467942  264375 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 22:08:25.471859  264375 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I0717 22:08:25.471878  264375 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 22:08:25.487810  264375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
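kindnet is applied here as a plain manifest with the version-matched kubectl; the node cannot report Ready until the CNI DaemonSet is running, which is why the Ready transition below lands about 30 seconds later. A manual check might look like this (assuming the DaemonSet is named kindnet, consistent with the kindnet-9rd67 pod later in this log):

    sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system rollout status daemonset/kindnet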
	I0717 22:08:25.923654  264375 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 22:08:25.923815  264375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:08:25.923910  264375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.0 minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8 minikube.k8s.io/name=ingress-addon-legacy-988346 minikube.k8s.io/updated_at=2023_07_17T22_08_25_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:08:26.068870  264375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:08:26.068875  264375 ops.go:34] apiserver oom_adj: -16
	I0717 22:08:26.637863  264375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:08:27.137372  264375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:08:27.637823  264375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:08:28.137428  264375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:08:28.637550  264375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:08:29.138057  264375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:08:29.638152  264375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:08:30.138209  264375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:08:30.638239  264375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:08:31.137875  264375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:08:31.637248  264375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:08:32.137679  264375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:08:32.637942  264375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:08:33.138364  264375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:08:33.637417  264375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:08:34.137417  264375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:08:34.637839  264375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:08:35.137449  264375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:08:35.638211  264375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:08:36.137468  264375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:08:36.637831  264375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:08:37.138208  264375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:08:37.638265  264375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:08:38.137519  264375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:08:38.637336  264375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:08:39.138203  264375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:08:39.637438  264375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:08:40.138293  264375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:08:40.637598  264375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:08:40.704408  264375 kubeadm.go:1081] duration metric: took 14.780634834s to wait for elevateKubeSystemPrivileges.
	I0717 22:08:40.704449  264375 kubeadm.go:406] StartCluster complete in 26.706965404s
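The burst of "get sa default" calls above is the elevateKubeSystemPrivileges step: minikube binds cluster-admin to kube-system:default (the minikube-rbac ClusterRoleBinding created at 22:08:25.923815) and then polls until the default ServiceAccount exists. Roughly equivalent by hand, under the same kubeconfig:

    # grant kube-system:default cluster-admin, then wait for the default SA to appear
    kubectl create clusterrolebinding minikube-rbac \
      --clusterrole=cluster-admin --serviceaccount=kube-system:default
    until kubectl get sa default >/dev/null 2>&1; do sleep 0.5; done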
	I0717 22:08:40.704480  264375 settings.go:142] acquiring lock: {Name:mkd04bbc59ef11ead8108410e404fcf464b56f66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:08:40.704544  264375 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16899-218877/kubeconfig
	I0717 22:08:40.705283  264375 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-218877/kubeconfig: {Name:mkbb3c2ee0d4a9dc4a5c436ca7b4ee88dbc131b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:08:40.705509  264375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 22:08:40.705701  264375 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 22:08:40.705805  264375 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-988346"
	I0717 22:08:40.705830  264375 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-988346"
	I0717 22:08:40.705841  264375 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-988346"
	I0717 22:08:40.705854  264375 config.go:182] Loaded profile config "ingress-addon-legacy-988346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0717 22:08:40.705870  264375 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-988346"
	I0717 22:08:40.705883  264375 host.go:66] Checking if "ingress-addon-legacy-988346" exists ...
	I0717 22:08:40.706260  264375 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-988346 --format={{.State.Status}}
	I0717 22:08:40.706233  264375 kapi.go:59] client config for ingress-addon-legacy-988346: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/client.crt", KeyFile:"/home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/client.key", CAFile:"/home/jenkins/minikube-integration/16899-218877/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 22:08:40.706460  264375 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-988346 --format={{.State.Status}}
	I0717 22:08:40.707202  264375 cert_rotation.go:137] Starting client certificate rotation controller
	I0717 22:08:40.725955  264375 kapi.go:59] client config for ingress-addon-legacy-988346: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/client.crt", KeyFile:"/home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/client.key", CAFile:"/home/jenkins/minikube-integration/16899-218877/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 22:08:40.732241  264375 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:08:40.730649  264375 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-988346"
	I0717 22:08:40.733562  264375 host.go:66] Checking if "ingress-addon-legacy-988346" exists ...
	I0717 22:08:40.733717  264375 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 22:08:40.733748  264375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 22:08:40.733810  264375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-988346
	I0717 22:08:40.734105  264375 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-988346 --format={{.State.Status}}
	I0717 22:08:40.752137  264375 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 22:08:40.752167  264375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 22:08:40.752258  264375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-988346
	I0717 22:08:40.752306  264375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/ingress-addon-legacy-988346/id_rsa Username:docker}
	I0717 22:08:40.773762  264375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/ingress-addon-legacy-988346/id_rsa Username:docker}
	I0717 22:08:40.870782  264375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 22:08:40.979441  264375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 22:08:41.083474  264375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 22:08:41.261462  264375 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-988346" context rescaled to 1 replicas
	I0717 22:08:41.261526  264375 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 22:08:41.264521  264375 out.go:177] * Verifying Kubernetes components...
	I0717 22:08:41.266140  264375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:08:41.375645  264375 start.go:901] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
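The sed pipeline at 22:08:41 rewrites the coredns ConfigMap in place; reconstructed from its expressions, the Corefile gains a hosts block so pods can resolve host.minikube.internal to the gateway:

        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }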
	I0717 22:08:41.499734  264375 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0717 22:08:41.501151  264375 addons.go:502] enable addons completed in 795.463633ms: enabled=[storage-provisioner default-storageclass]
	I0717 22:08:41.498942  264375 kapi.go:59] client config for ingress-addon-legacy-988346: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/client.crt", KeyFile:"/home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/client.key", CAFile:"/home/jenkins/minikube-integration/16899-218877/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 22:08:41.501487  264375 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-988346" to be "Ready" ...
	I0717 22:08:43.508468  264375 node_ready.go:58] node "ingress-addon-legacy-988346" has status "Ready":"False"
	I0717 22:08:46.197498  264375 node_ready.go:58] node "ingress-addon-legacy-988346" has status "Ready":"False"
	I0717 22:08:48.508003  264375 node_ready.go:58] node "ingress-addon-legacy-988346" has status "Ready":"False"
	I0717 22:08:50.508238  264375 node_ready.go:58] node "ingress-addon-legacy-988346" has status "Ready":"False"
	I0717 22:08:52.508523  264375 node_ready.go:58] node "ingress-addon-legacy-988346" has status "Ready":"False"
	I0717 22:08:55.007316  264375 node_ready.go:58] node "ingress-addon-legacy-988346" has status "Ready":"False"
	I0717 22:08:56.008490  264375 node_ready.go:49] node "ingress-addon-legacy-988346" has status "Ready":"True"
	I0717 22:08:56.008521  264375 node_ready.go:38] duration metric: took 14.507011567s waiting for node "ingress-addon-legacy-988346" to be "Ready" ...
	I0717 22:08:56.008532  264375 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:08:56.016333  264375 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-knf7l" in "kube-system" namespace to be "Ready" ...
	I0717 22:08:58.022326  264375 pod_ready.go:102] pod "coredns-66bff467f8-knf7l" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-07-17 22:08:40 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize:}
	I0717 22:09:00.024406  264375 pod_ready.go:102] pod "coredns-66bff467f8-knf7l" in "kube-system" namespace has status "Ready":"False"
	I0717 22:09:02.524717  264375 pod_ready.go:102] pod "coredns-66bff467f8-knf7l" in "kube-system" namespace has status "Ready":"False"
	I0717 22:09:05.023727  264375 pod_ready.go:92] pod "coredns-66bff467f8-knf7l" in "kube-system" namespace has status "Ready":"True"
	I0717 22:09:05.023752  264375 pod_ready.go:81] duration metric: took 9.007390834s waiting for pod "coredns-66bff467f8-knf7l" in "kube-system" namespace to be "Ready" ...
	I0717 22:09:05.023762  264375 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-988346" in "kube-system" namespace to be "Ready" ...
	I0717 22:09:05.027699  264375 pod_ready.go:92] pod "etcd-ingress-addon-legacy-988346" in "kube-system" namespace has status "Ready":"True"
	I0717 22:09:05.027719  264375 pod_ready.go:81] duration metric: took 3.951195ms waiting for pod "etcd-ingress-addon-legacy-988346" in "kube-system" namespace to be "Ready" ...
	I0717 22:09:05.027730  264375 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-988346" in "kube-system" namespace to be "Ready" ...
	I0717 22:09:05.031278  264375 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-988346" in "kube-system" namespace has status "Ready":"True"
	I0717 22:09:05.031300  264375 pod_ready.go:81] duration metric: took 3.559339ms waiting for pod "kube-apiserver-ingress-addon-legacy-988346" in "kube-system" namespace to be "Ready" ...
	I0717 22:09:05.031308  264375 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-988346" in "kube-system" namespace to be "Ready" ...
	I0717 22:09:05.034989  264375 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-988346" in "kube-system" namespace has status "Ready":"True"
	I0717 22:09:05.035007  264375 pod_ready.go:81] duration metric: took 3.693398ms waiting for pod "kube-controller-manager-ingress-addon-legacy-988346" in "kube-system" namespace to be "Ready" ...
	I0717 22:09:05.035016  264375 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5d8nr" in "kube-system" namespace to be "Ready" ...
	I0717 22:09:05.038473  264375 pod_ready.go:92] pod "kube-proxy-5d8nr" in "kube-system" namespace has status "Ready":"True"
	I0717 22:09:05.038493  264375 pod_ready.go:81] duration metric: took 3.471892ms waiting for pod "kube-proxy-5d8nr" in "kube-system" namespace to be "Ready" ...
	I0717 22:09:05.038500  264375 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-988346" in "kube-system" namespace to be "Ready" ...
	I0717 22:09:05.219902  264375 request.go:628] Waited for 181.328723ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-988346
	I0717 22:09:05.419842  264375 request.go:628] Waited for 197.357575ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-988346
	I0717 22:09:05.422490  264375 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-988346" in "kube-system" namespace has status "Ready":"True"
	I0717 22:09:05.422513  264375 pod_ready.go:81] duration metric: took 384.004249ms waiting for pod "kube-scheduler-ingress-addon-legacy-988346" in "kube-system" namespace to be "Ready" ...
	I0717 22:09:05.422524  264375 pod_ready.go:38] duration metric: took 9.413967342s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:09:05.422541  264375 api_server.go:52] waiting for apiserver process to appear ...
	I0717 22:09:05.422599  264375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:09:05.433412  264375 api_server.go:72] duration metric: took 24.171827217s to wait for apiserver process to appear ...
	I0717 22:09:05.433433  264375 api_server.go:88] waiting for apiserver healthz status ...
	I0717 22:09:05.433449  264375 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0717 22:09:05.439184  264375 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0717 22:09:05.440010  264375 api_server.go:141] control plane version: v1.18.20
	I0717 22:09:05.440035  264375 api_server.go:131] duration metric: took 6.596836ms to wait for apiserver health ...
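The health gate is simply the apiserver's /healthz endpoint, followed by /version for the control-plane version; under default RBAC both are readable anonymously (system:public-info-viewer), so they can be checked by hand:

    curl -sk https://192.168.49.2:8443/healthz   # expect: ok
    curl -sk https://192.168.49.2:8443/version   # reports v1.18.20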
	I0717 22:09:05.440045  264375 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 22:09:05.619543  264375 request.go:628] Waited for 179.412784ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0717 22:09:05.624461  264375 system_pods.go:59] 8 kube-system pods found
	I0717 22:09:05.624497  264375 system_pods.go:61] "coredns-66bff467f8-knf7l" [4c37ac9e-d750-446e-b56c-fc71ab6f22da] Running
	I0717 22:09:05.624504  264375 system_pods.go:61] "etcd-ingress-addon-legacy-988346" [7000f7f6-5f50-4476-a80c-7c97fe9fbc7c] Running
	I0717 22:09:05.624507  264375 system_pods.go:61] "kindnet-9rd67" [2628aa1b-4a1e-438e-aa87-8aef31014ed7] Running
	I0717 22:09:05.624511  264375 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-988346" [ef905d5a-af80-46e2-8fd0-fd95ce8426bb] Running
	I0717 22:09:05.624518  264375 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-988346" [e09550e6-02ae-481e-827c-5ad41a5102ed] Running
	I0717 22:09:05.624524  264375 system_pods.go:61] "kube-proxy-5d8nr" [a4dacbee-196e-4783-8f7c-b26f2888bd70] Running
	I0717 22:09:05.624528  264375 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-988346" [32f4cc85-bfd8-4dfb-8045-5506e4186e91] Running
	I0717 22:09:05.624532  264375 system_pods.go:61] "storage-provisioner" [aae24bb1-248f-4b27-b6ae-f2a9b8fb92d6] Running
	I0717 22:09:05.624537  264375 system_pods.go:74] duration metric: took 184.487974ms to wait for pod list to return data ...
	I0717 22:09:05.624545  264375 default_sa.go:34] waiting for default service account to be created ...
	I0717 22:09:05.819983  264375 request.go:628] Waited for 195.355019ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0717 22:09:05.822453  264375 default_sa.go:45] found service account: "default"
	I0717 22:09:05.822477  264375 default_sa.go:55] duration metric: took 197.923756ms for default service account to be created ...
	I0717 22:09:05.822486  264375 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 22:09:06.019907  264375 request.go:628] Waited for 197.348261ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0717 22:09:06.026385  264375 system_pods.go:86] 8 kube-system pods found
	I0717 22:09:06.026419  264375 system_pods.go:89] "coredns-66bff467f8-knf7l" [4c37ac9e-d750-446e-b56c-fc71ab6f22da] Running
	I0717 22:09:06.026428  264375 system_pods.go:89] "etcd-ingress-addon-legacy-988346" [7000f7f6-5f50-4476-a80c-7c97fe9fbc7c] Running
	I0717 22:09:06.026435  264375 system_pods.go:89] "kindnet-9rd67" [2628aa1b-4a1e-438e-aa87-8aef31014ed7] Running
	I0717 22:09:06.026447  264375 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-988346" [ef905d5a-af80-46e2-8fd0-fd95ce8426bb] Running
	I0717 22:09:06.026455  264375 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-988346" [e09550e6-02ae-481e-827c-5ad41a5102ed] Running
	I0717 22:09:06.026465  264375 system_pods.go:89] "kube-proxy-5d8nr" [a4dacbee-196e-4783-8f7c-b26f2888bd70] Running
	I0717 22:09:06.026470  264375 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-988346" [32f4cc85-bfd8-4dfb-8045-5506e4186e91] Running
	I0717 22:09:06.026478  264375 system_pods.go:89] "storage-provisioner" [aae24bb1-248f-4b27-b6ae-f2a9b8fb92d6] Running
	I0717 22:09:06.026491  264375 system_pods.go:126] duration metric: took 203.999295ms to wait for k8s-apps to be running ...
	I0717 22:09:06.026500  264375 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 22:09:06.026554  264375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:09:06.038734  264375 system_svc.go:56] duration metric: took 12.220229ms WaitForService to wait for kubelet.
	I0717 22:09:06.038765  264375 kubeadm.go:581] duration metric: took 24.777182283s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 22:09:06.038792  264375 node_conditions.go:102] verifying NodePressure condition ...
	I0717 22:09:06.219185  264375 request.go:628] Waited for 180.27851ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0717 22:09:06.221992  264375 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0717 22:09:06.222016  264375 node_conditions.go:123] node cpu capacity is 8
	I0717 22:09:06.222029  264375 node_conditions.go:105] duration metric: took 183.231565ms to run NodePressure ...
	I0717 22:09:06.222039  264375 start.go:228] waiting for startup goroutines ...
	I0717 22:09:06.222045  264375 start.go:233] waiting for cluster config update ...
	I0717 22:09:06.222067  264375 start.go:242] writing updated cluster config ...
	I0717 22:09:06.222397  264375 ssh_runner.go:195] Run: rm -f paused
	I0717 22:09:06.268685  264375 start.go:578] kubectl: 1.27.3, cluster: 1.18.20 (minor skew: 9)
	I0717 22:09:06.270721  264375 out.go:177] 
	W0717 22:09:06.272523  264375 out.go:239] ! /usr/local/bin/kubectl is version 1.27.3, which may have incompatibilities with Kubernetes 1.18.20.
	I0717 22:09:06.273970  264375 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0717 22:09:06.275443  264375 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-988346" cluster and "default" namespace by default
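The skew warning above follows kubectl's version-skew policy: kubectl is only supported within one minor version of the apiserver, and 1.27 against 1.18 is nine minors apart. The suggested workaround runs a cached, version-matched kubectl through minikube:

    minikube kubectl -- get pods -A   # uses a downloaded kubectl v1.18.20 against this cluster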
	
	* 
	* ==> CRI-O <==
	* Jul 17 22:11:59 ingress-addon-legacy-988346 crio[964]: time="2023-07-17 22:11:59.333717124Z" level=info msg="Started container" PID=4773 containerID=e132baf20a7b41c4ae403b5ea44ae5fd6a782fbccb3a26e0d74cab2a687eb278 description=default/hello-world-app-5f5d8b66bb-tdkpp/hello-world-app id=3cd8c998-d8d2-47ca-968f-202fb594102b name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=d22ecd3b55c6f69a24843e19414a4c53e12916dcae4185a7ba66427dd9a0904b
	Jul 17 22:12:03 ingress-addon-legacy-988346 crio[964]: time="2023-07-17 22:12:03.662578227Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=149bc157-de52-4637-abac-aea165a59b5a name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 17 22:12:13 ingress-addon-legacy-988346 crio[964]: time="2023-07-17 22:12:13.663367282Z" level=info msg="Stopping pod sandbox: b8fbbada1b850bb10e97c0ba528cfb3db75623bf0c4c413bed2a77dc5a763d34" id=e7a266c7-187a-4a99-b3e4-da34666759c1 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 17 22:12:13 ingress-addon-legacy-988346 crio[964]: time="2023-07-17 22:12:13.664539098Z" level=info msg="Stopped pod sandbox: b8fbbada1b850bb10e97c0ba528cfb3db75623bf0c4c413bed2a77dc5a763d34" id=e7a266c7-187a-4a99-b3e4-da34666759c1 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 17 22:12:14 ingress-addon-legacy-988346 crio[964]: time="2023-07-17 22:12:14.043735596Z" level=info msg="Stopping pod sandbox: b8fbbada1b850bb10e97c0ba528cfb3db75623bf0c4c413bed2a77dc5a763d34" id=bebc84c5-685a-49de-980c-07ea21763c23 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 17 22:12:14 ingress-addon-legacy-988346 crio[964]: time="2023-07-17 22:12:14.043791101Z" level=info msg="Stopped pod sandbox (already stopped): b8fbbada1b850bb10e97c0ba528cfb3db75623bf0c4c413bed2a77dc5a763d34" id=bebc84c5-685a-49de-980c-07ea21763c23 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 17 22:12:14 ingress-addon-legacy-988346 crio[964]: time="2023-07-17 22:12:14.783227385Z" level=info msg="Stopping container: a159cf05abc5498a06bd2b42f2acd587715ad53dfad307b8c7636c4bd4fd55ab (timeout: 2s)" id=aaea9037-694b-4190-b146-7de6d4ebd175 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jul 17 22:12:14 ingress-addon-legacy-988346 crio[964]: time="2023-07-17 22:12:14.785591963Z" level=info msg="Stopping container: a159cf05abc5498a06bd2b42f2acd587715ad53dfad307b8c7636c4bd4fd55ab (timeout: 2s)" id=2e9c0944-1a1b-40c2-a29b-07e7d910b85f name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jul 17 22:12:15 ingress-addon-legacy-988346 crio[964]: time="2023-07-17 22:12:15.662301374Z" level=info msg="Stopping pod sandbox: b8fbbada1b850bb10e97c0ba528cfb3db75623bf0c4c413bed2a77dc5a763d34" id=8eed17e5-f87f-4114-9bf8-3918dd2a6eea name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 17 22:12:15 ingress-addon-legacy-988346 crio[964]: time="2023-07-17 22:12:15.662362253Z" level=info msg="Stopped pod sandbox (already stopped): b8fbbada1b850bb10e97c0ba528cfb3db75623bf0c4c413bed2a77dc5a763d34" id=8eed17e5-f87f-4114-9bf8-3918dd2a6eea name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 17 22:12:16 ingress-addon-legacy-988346 crio[964]: time="2023-07-17 22:12:16.792664314Z" level=warning msg="Stopping container a159cf05abc5498a06bd2b42f2acd587715ad53dfad307b8c7636c4bd4fd55ab with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=aaea9037-694b-4190-b146-7de6d4ebd175 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jul 17 22:12:16 ingress-addon-legacy-988346 conmon[3425]: conmon a159cf05abc5498a06bd <ninfo>: container 3437 exited with status 137
	Jul 17 22:12:16 ingress-addon-legacy-988346 crio[964]: time="2023-07-17 22:12:16.958752154Z" level=info msg="Stopped container a159cf05abc5498a06bd2b42f2acd587715ad53dfad307b8c7636c4bd4fd55ab: ingress-nginx/ingress-nginx-controller-7fcf777cb7-mgkqc/controller" id=aaea9037-694b-4190-b146-7de6d4ebd175 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jul 17 22:12:16 ingress-addon-legacy-988346 crio[964]: time="2023-07-17 22:12:16.958776411Z" level=info msg="Stopped container a159cf05abc5498a06bd2b42f2acd587715ad53dfad307b8c7636c4bd4fd55ab: ingress-nginx/ingress-nginx-controller-7fcf777cb7-mgkqc/controller" id=2e9c0944-1a1b-40c2-a29b-07e7d910b85f name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jul 17 22:12:16 ingress-addon-legacy-988346 crio[964]: time="2023-07-17 22:12:16.959469130Z" level=info msg="Stopping pod sandbox: d5bb025e1e2eb650c1f1d04d4f68be5e0bce94b83c4390f18297b690d85eee73" id=c6327180-0b23-4939-b639-82a7d5c66278 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 17 22:12:16 ingress-addon-legacy-988346 crio[964]: time="2023-07-17 22:12:16.959479257Z" level=info msg="Stopping pod sandbox: d5bb025e1e2eb650c1f1d04d4f68be5e0bce94b83c4390f18297b690d85eee73" id=f238dc58-aa58-4498-a241-919b6a77e0d1 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 17 22:12:16 ingress-addon-legacy-988346 crio[964]: time="2023-07-17 22:12:16.962330559Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-RWI67SD3VRV65BUR - [0:0]\n:KUBE-HP-BZKFGJ4ZJY5BACJO - [0:0]\n-X KUBE-HP-BZKFGJ4ZJY5BACJO\n-X KUBE-HP-RWI67SD3VRV65BUR\nCOMMIT\n"
	Jul 17 22:12:16 ingress-addon-legacy-988346 crio[964]: time="2023-07-17 22:12:16.963752374Z" level=info msg="Closing host port tcp:80"
	Jul 17 22:12:16 ingress-addon-legacy-988346 crio[964]: time="2023-07-17 22:12:16.963790793Z" level=info msg="Closing host port tcp:443"
	Jul 17 22:12:16 ingress-addon-legacy-988346 crio[964]: time="2023-07-17 22:12:16.964793827Z" level=info msg="Host port tcp:80 does not have an open socket"
	Jul 17 22:12:16 ingress-addon-legacy-988346 crio[964]: time="2023-07-17 22:12:16.964816317Z" level=info msg="Host port tcp:443 does not have an open socket"
	Jul 17 22:12:16 ingress-addon-legacy-988346 crio[964]: time="2023-07-17 22:12:16.964950955Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-mgkqc Namespace:ingress-nginx ID:d5bb025e1e2eb650c1f1d04d4f68be5e0bce94b83c4390f18297b690d85eee73 UID:2e63e2f7-c5b3-4030-9025-5d0c81a7ba3b NetNS:/var/run/netns/1c07039b-e238-49f8-9af1-20ab0542fee8 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jul 17 22:12:16 ingress-addon-legacy-988346 crio[964]: time="2023-07-17 22:12:16.965062442Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-mgkqc from CNI network \"kindnet\" (type=ptp)"
	Jul 17 22:12:17 ingress-addon-legacy-988346 crio[964]: time="2023-07-17 22:12:17.008951469Z" level=info msg="Stopped pod sandbox: d5bb025e1e2eb650c1f1d04d4f68be5e0bce94b83c4390f18297b690d85eee73" id=c6327180-0b23-4939-b639-82a7d5c66278 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 17 22:12:17 ingress-addon-legacy-988346 crio[964]: time="2023-07-17 22:12:17.009077385Z" level=info msg="Stopped pod sandbox (already stopped): d5bb025e1e2eb650c1f1d04d4f68be5e0bce94b83c4390f18297b690d85eee73" id=f238dc58-aa58-4498-a241-919b6a77e0d1 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e132baf20a7b4       gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea            23 seconds ago      Running             hello-world-app           0                   d22ecd3b55c6f       hello-world-app-5f5d8b66bb-tdkpp
	50039a403dd23       docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6                    2 minutes ago       Running             nginx                     0                   6d2c64bdd11c2       nginx
	a159cf05abc54       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   3 minutes ago       Exited              controller                0                   d5bb025e1e2eb       ingress-nginx-controller-7fcf777cb7-mgkqc
	01cd39d8d87db       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              patch                     0                   f7ea9fa681c41       ingress-nginx-admission-patch-d429k
	ddaf9d4f8340d       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   1abc5abfef34c       ingress-nginx-admission-create-gmlpf
	38f9ef85c89b9       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   0e2a6c39d12ca       coredns-66bff467f8-knf7l
	b1d7cdbbfeda8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   0ca898bd5998f       storage-provisioner
	c0fc3b2c48a6d       docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974                 3 minutes ago       Running             kindnet-cni               0                   b1f14f1eb64fa       kindnet-9rd67
	417a4a9b95a01       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   01bcc2f2af199       kube-proxy-5d8nr
	3b0ad544ffa97       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   4 minutes ago       Running             kube-apiserver            0                   11ad7a808ccd5       kube-apiserver-ingress-addon-legacy-988346
	1e573dcb530fb       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   4 minutes ago       Running             etcd                      0                   be1aa65c2445e       etcd-ingress-addon-legacy-988346
	0cbce1bebb2de       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   4 minutes ago       Running             kube-controller-manager   0                   97cfd4739bc87       kube-controller-manager-ingress-addon-legacy-988346
	814fd588f62ad       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   4 minutes ago       Running             kube-scheduler            0                   ce69df616b3ea       kube-scheduler-ingress-addon-legacy-988346
	
	* 
	* ==> coredns [38f9ef85c89b994216b5c6eecd5fb3e2ea9078d0cb3247815e92b692306803d2] <==
	* [INFO] 10.244.0.5:54604 - 22413 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.004134973s
	[INFO] 10.244.0.5:53604 - 54393 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003851781s
	[INFO] 10.244.0.5:56976 - 48409 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004019297s
	[INFO] 10.244.0.5:54604 - 44849 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00386721s
	[INFO] 10.244.0.5:54148 - 61810 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003960265s
	[INFO] 10.244.0.5:44537 - 38666 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004023406s
	[INFO] 10.244.0.5:39588 - 49152 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003964223s
	[INFO] 10.244.0.5:39001 - 43526 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004291495s
	[INFO] 10.244.0.5:54259 - 58507 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003708849s
	[INFO] 10.244.0.5:39001 - 43862 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003767152s
	[INFO] 10.244.0.5:54604 - 11545 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003883343s
	[INFO] 10.244.0.5:54259 - 12960 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003651649s
	[INFO] 10.244.0.5:53604 - 61340 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004156279s
	[INFO] 10.244.0.5:44537 - 32331 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004045408s
	[INFO] 10.244.0.5:54148 - 45040 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004004335s
	[INFO] 10.244.0.5:39588 - 39112 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003957735s
	[INFO] 10.244.0.5:39001 - 18238 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000072026s
	[INFO] 10.244.0.5:54604 - 31102 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000060241s
	[INFO] 10.244.0.5:53604 - 39302 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000050533s
	[INFO] 10.244.0.5:54259 - 26903 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000141779s
	[INFO] 10.244.0.5:44537 - 30321 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00016475s
	[INFO] 10.244.0.5:56976 - 14643 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004269642s
	[INFO] 10.244.0.5:39588 - 47044 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000208221s
	[INFO] 10.244.0.5:54148 - 1609 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00022076s
	[INFO] 10.244.0.5:56976 - 58049 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000068858s
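A note on the pattern above: the NXDOMAIN/NOERROR pairs are ordinary resolv.conf search-list expansion, not a CoreDNS fault. The client looks up hello-world-app.default.svc.cluster.local, which has fewer dots than the kubelet's default ndots:5, so the resolver first appends each search suffix inherited from the host (here the GCE domains c.k8s-minikube.internal and google.internal, both NXDOMAIN) before trying the name as-is. A minimal Go sketch of that expansion, with the search list assumed from the queries logged above:

	// search_expansion.go - a sketch (not CoreDNS or test code) of the
	// resolv.conf search-list expansion that produces the query names above.
	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		name := "hello-world-app.default.svc.cluster.local" // 4 dots
		search := []string{"c.k8s-minikube.internal", "google.internal"}
		const ndots = 5 // kubelet's default for cluster-first pod DNS

		// Fewer dots than ndots: try each search suffix first (the
		// NXDOMAIN lines), then the name as-is (the NOERROR answers).
		if strings.Count(name, ".") < ndots {
			for _, s := range search {
				fmt.Printf("%s.%s. -> NXDOMAIN\n", name, s)
			}
		}
		fmt.Printf("%s. -> NOERROR\n", name)
	}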
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-988346
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-988346
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8
	                    minikube.k8s.io/name=ingress-addon-legacy-988346
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_17T22_08_25_0700
	                    minikube.k8s.io/version=v1.31.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 22:08:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-988346
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 22:12:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 22:09:55 +0000   Mon, 17 Jul 2023 22:08:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 22:09:55 +0000   Mon, 17 Jul 2023 22:08:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 22:09:55 +0000   Mon, 17 Jul 2023 22:08:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 22:09:55 +0000   Mon, 17 Jul 2023 22:08:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-988346
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	System Info:
	  Machine ID:                 b9e3177aeed3448ea7f9250c1125cf8a
	  System UUID:                e6215a54-d211-4d30-b473-78a1f4210f41
	  Boot ID:                    7db0a284-d4e9-48b4-92fc-f96afb04e8db
	  Kernel Version:             5.15.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-tdkpp                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  kube-system                 coredns-66bff467f8-knf7l                               100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m42s
	  kube-system                 etcd-ingress-addon-legacy-988346                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 kindnet-9rd67                                          100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m42s
	  kube-system                 kube-apiserver-ingress-addon-legacy-988346             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-988346    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 kube-proxy-5d8nr                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 kube-scheduler-ingress-addon-legacy-988346             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             120Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  NodeHasSufficientMemory  4m5s (x4 over 4m5s)  kubelet     Node ingress-addon-legacy-988346 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m5s (x4 over 4m5s)  kubelet     Node ingress-addon-legacy-988346 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m5s (x4 over 4m5s)  kubelet     Node ingress-addon-legacy-988346 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m57s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m57s                kubelet     Node ingress-addon-legacy-988346 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m57s                kubelet     Node ingress-addon-legacy-988346 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m57s                kubelet     Node ingress-addon-legacy-988346 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m41s                kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m27s                kubelet     Node ingress-addon-legacy-988346 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.007355] FS-Cache: O-key=[8] 'ffa00f0200000000'
	[  +0.004920] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006588] FS-Cache: N-cookie d=00000000d2af0321{9p.inode} n=0000000037715637
	[  +0.007353] FS-Cache: N-key=[8] 'ffa00f0200000000'
	[  +3.061117] FS-Cache: Duplicate cookie detected
	[  +0.004859] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.006901] FS-Cache: O-cookie d=00000000313c8b61{9P.session} n=00000000be6062ae
	[  +0.007702] FS-Cache: O-key=[10] '34323936353335363438'
	[  +0.006739] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.007821] FS-Cache: N-cookie d=00000000313c8b61{9P.session} n=000000006adef14a
	[  +0.008922] FS-Cache: N-key=[10] '34323936353335363438'
	[Jul17 22:09] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 2a 62 39 8f f8 33 ae 79 92 20 6c b1 08 00
	[  +1.031915] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 2a 62 39 8f f8 33 ae 79 92 20 6c b1 08 00
	[  +2.015858] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 2a 62 39 8f f8 33 ae 79 92 20 6c b1 08 00
	[  +4.255723] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 2a 62 39 8f f8 33 ae 79 92 20 6c b1 08 00
	[Jul17 22:10] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 2a 62 39 8f f8 33 ae 79 92 20 6c b1 08 00
	[ +16.130862] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 2a 62 39 8f f8 33 ae 79 92 20 6c b1 08 00
	[ +32.505735] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000028] ll header: 00000000: 2a 62 39 8f f8 33 ae 79 92 20 6c b1 08 00
	
	* 
	* ==> etcd [1e573dcb530fb547c1959217a1798c5c74aceaba2bd2c5e17f22a53f5a4961b7] <==
	* raft2023/07/17 22:08:18 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-07-17 22:08:18.561635 W | auth: simple token is not cryptographically signed
	2023-07-17 22:08:18.564343 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-07-17 22:08:18.564689 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/07/17 22:08:18 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-07-17 22:08:18.565119 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2023-07-17 22:08:18.567223 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-07-17 22:08:18.567459 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-07-17 22:08:18.567572 I | embed: listening for peers on 192.168.49.2:2380
	raft2023/07/17 22:08:19 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/07/17 22:08:19 INFO: aec36adc501070cc became candidate at term 2
	raft2023/07/17 22:08:19 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/07/17 22:08:19 INFO: aec36adc501070cc became leader at term 2
	raft2023/07/17 22:08:19 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-07-17 22:08:19.400528 I | embed: ready to serve client requests
	2023-07-17 22:08:19.400582 I | etcdserver: setting up the initial cluster version to 3.4
	2023-07-17 22:08:19.400658 I | etcdserver: published {Name:ingress-addon-legacy-988346 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-07-17 22:08:19.400717 I | embed: ready to serve client requests
	2023-07-17 22:08:19.401517 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-07-17 22:08:19.401677 I | etcdserver/api: enabled capabilities for version 3.4
	2023-07-17 22:08:19.402985 I | embed: serving client requests on 192.168.49.2:2379
	2023-07-17 22:08:19.403053 I | embed: serving client requests on 127.0.0.1:2379
	2023-07-17 22:08:45.954686 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-controller-manager-ingress-addon-legacy-988346\" " with result "range_response_count:1 size:6682" took too long (177.506264ms) to execute
	2023-07-17 22:08:46.195755 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-scheduler-ingress-addon-legacy-988346\" " with result "range_response_count:1 size:3909" took too long (233.852353ms) to execute
	2023-07-17 22:08:46.195901 W | etcdserver: read-only range request "key:\"/registry/minions/ingress-addon-legacy-988346\" " with result "range_response_count:1 size:6604" took too long (189.347236ms) to execute
	
	* 
	* ==> kernel <==
	*  22:12:22 up  1:54,  0 users,  load average: 0.11, 0.55, 0.81
	Linux ingress-addon-legacy-988346 5.15.0-1037-gcp #45~20.04.1-Ubuntu SMP Thu Jun 22 08:31:09 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [c0fc3b2c48a6db3c0eaefa1697863a2f1a1145341abf916fda4519c8618dc197] <==
	* I0717 22:10:17.529155       1 main.go:227] handling current node
	I0717 22:10:27.541132       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 22:10:27.541160       1 main.go:227] handling current node
	I0717 22:10:37.546964       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 22:10:37.546988       1 main.go:227] handling current node
	I0717 22:10:47.559037       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 22:10:47.559061       1 main.go:227] handling current node
	I0717 22:10:57.562835       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 22:10:57.562863       1 main.go:227] handling current node
	I0717 22:11:07.574466       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 22:11:07.574490       1 main.go:227] handling current node
	I0717 22:11:17.578093       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 22:11:17.578118       1 main.go:227] handling current node
	I0717 22:11:27.590279       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 22:11:27.590303       1 main.go:227] handling current node
	I0717 22:11:37.593545       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 22:11:37.593570       1 main.go:227] handling current node
	I0717 22:11:47.597288       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 22:11:47.597317       1 main.go:227] handling current node
	I0717 22:11:57.602672       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 22:11:57.602702       1 main.go:227] handling current node
	I0717 22:12:07.612632       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 22:12:07.612654       1 main.go:227] handling current node
	I0717 22:12:17.624740       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 22:12:17.624766       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [3b0ad544ffa9774aa1dc6b07c6cf3cd282df012b69c697ca583ebe024e16dbe3] <==
	* I0717 22:08:22.513669       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	E0717 22:08:22.523205       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I0717 22:08:22.612900       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0717 22:08:22.612900       1 cache.go:39] Caches are synced for autoregister controller
	I0717 22:08:22.612905       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0717 22:08:22.613492       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0717 22:08:22.613580       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 22:08:23.512109       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0717 22:08:23.512146       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0717 22:08:23.517123       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0717 22:08:23.519927       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0717 22:08:23.519948       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0717 22:08:23.794063       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 22:08:23.820928       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0717 22:08:23.893921       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0717 22:08:23.894932       1 controller.go:609] quota admission added evaluator for: endpoints
	I0717 22:08:23.900032       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0717 22:08:24.826336       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0717 22:08:25.265804       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0717 22:08:25.443074       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0717 22:08:25.587105       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 22:08:40.278543       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0717 22:08:40.464353       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0717 22:09:06.899518       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0717 22:09:35.391742       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [0cbce1bebb2de8475a43f625b6b8ae765d90dad5a578a6bb024a27df0ba3940a] <==
	* I0717 22:08:40.527583       1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator 
	E0717 22:08:40.569806       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	E0717 22:08:40.570420       1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
	I0717 22:08:40.624706       1 shared_informer.go:230] Caches are synced for stateful set 
	I0717 22:08:40.674780       1 shared_informer.go:230] Caches are synced for bootstrap_signer 
	I0717 22:08:40.679787       1 shared_informer.go:230] Caches are synced for disruption 
	I0717 22:08:40.679809       1 disruption.go:339] Sending events to api server.
	I0717 22:08:40.729430       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"46d64aee-f038-4981-8e20-ea1e03e1bdc5", APIVersion:"apps/v1", ResourceVersion:"367", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0717 22:08:40.737497       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"f88a9417-a015-4bbc-b289-459e70f1753b", APIVersion:"apps/v1", ResourceVersion:"368", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-lz66d
	I0717 22:08:40.781454       1 shared_informer.go:230] Caches are synced for resource quota 
	I0717 22:08:40.826855       1 shared_informer.go:230] Caches are synced for resource quota 
	I0717 22:08:40.859698       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0717 22:08:40.859729       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0717 22:08:40.862441       1 shared_informer.go:230] Caches are synced for persistent volume 
	I0717 22:08:40.880763       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0717 22:09:00.325939       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0717 22:09:06.893114       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"33a12abd-e550-4fc4-b1fe-002f02f6e894", APIVersion:"apps/v1", ResourceVersion:"478", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0717 22:09:06.902776       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"f9847244-9196-47c1-a0d4-cbe9eaa56a35", APIVersion:"apps/v1", ResourceVersion:"479", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-mgkqc
	I0717 22:09:06.905988       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"3e90e1b4-f3dc-408a-bd31-4f24da2683b8", APIVersion:"batch/v1", ResourceVersion:"482", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-gmlpf
	I0717 22:09:06.978305       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"ad9c1d42-db6c-4373-bd32-b02783774718", APIVersion:"batch/v1", ResourceVersion:"494", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-d429k
	I0717 22:09:11.739075       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"3e90e1b4-f3dc-408a-bd31-4f24da2683b8", APIVersion:"batch/v1", ResourceVersion:"489", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0717 22:09:12.740607       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"ad9c1d42-db6c-4373-bd32-b02783774718", APIVersion:"batch/v1", ResourceVersion:"502", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0717 22:11:56.692913       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"1fb6b344-4bc3-42cc-8012-b42b520fda34", APIVersion:"apps/v1", ResourceVersion:"720", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0717 22:11:56.698348       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"66cc2970-2d65-4f2e-930f-72ed4e80c2f4", APIVersion:"apps/v1", ResourceVersion:"721", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-tdkpp
	E0717 22:12:19.564348       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-dx4dt" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [417a4a9b95a01b3085685c8e1224b4a3a157925a716fb552feec417eb600a56e] <==
	* W0717 22:08:41.296341       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0717 22:08:41.360865       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0717 22:08:41.360912       1 server_others.go:186] Using iptables Proxier.
	I0717 22:08:41.361203       1 server.go:583] Version: v1.18.20
	I0717 22:08:41.361805       1 config.go:133] Starting endpoints config controller
	I0717 22:08:41.361890       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0717 22:08:41.361988       1 config.go:315] Starting service config controller
	I0717 22:08:41.362022       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0717 22:08:41.462056       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0717 22:08:41.462250       1 shared_informer.go:230] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [814fd588f62adcd4e15e4a05288cdf84976082b3343ce8434a5e9b4d17ced7d1] <==
	* I0717 22:08:22.573718       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0717 22:08:22.575398       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 22:08:22.575449       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 22:08:22.575956       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0717 22:08:22.576015       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0717 22:08:22.576754       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 22:08:22.577074       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 22:08:22.577492       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 22:08:22.578175       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 22:08:22.578317       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 22:08:22.578458       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 22:08:22.578545       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 22:08:22.578566       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 22:08:22.578580       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 22:08:22.578590       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 22:08:22.578683       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 22:08:22.578691       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 22:08:23.419431       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 22:08:23.498821       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 22:08:23.499956       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 22:08:23.599709       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 22:08:23.624220       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 22:08:23.703924       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0717 22:08:23.975646       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0717 22:08:40.307625       1 factory.go:503] pod: kube-system/coredns-66bff467f8-lz66d is already present in the active queue
	
	* 
	* ==> kubelet <==
	* Jul 17 22:11:41 ingress-addon-legacy-988346 kubelet[1882]: E0717 22:11:41.663122    1882 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jul 17 22:11:41 ingress-addon-legacy-988346 kubelet[1882]: E0717 22:11:41.663156    1882 pod_workers.go:191] Error syncing pod 9935ac20-bbd4-4098-9127-d8c5a77daa3d ("kube-ingress-dns-minikube_kube-system(9935ac20-bbd4-4098-9127-d8c5a77daa3d)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Jul 17 22:11:52 ingress-addon-legacy-988346 kubelet[1882]: E0717 22:11:52.663119    1882 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jul 17 22:11:52 ingress-addon-legacy-988346 kubelet[1882]: E0717 22:11:52.663163    1882 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jul 17 22:11:52 ingress-addon-legacy-988346 kubelet[1882]: E0717 22:11:52.663211    1882 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jul 17 22:11:52 ingress-addon-legacy-988346 kubelet[1882]: E0717 22:11:52.663239    1882 pod_workers.go:191] Error syncing pod 9935ac20-bbd4-4098-9127-d8c5a77daa3d ("kube-ingress-dns-minikube_kube-system(9935ac20-bbd4-4098-9127-d8c5a77daa3d)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Jul 17 22:11:56 ingress-addon-legacy-988346 kubelet[1882]: I0717 22:11:56.703111    1882 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Jul 17 22:11:56 ingress-addon-legacy-988346 kubelet[1882]: I0717 22:11:56.893185    1882 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-nxxrn" (UniqueName: "kubernetes.io/secret/f7f1deba-90c6-4088-b800-434c2e9f8d49-default-token-nxxrn") pod "hello-world-app-5f5d8b66bb-tdkpp" (UID: "f7f1deba-90c6-4088-b800-434c2e9f8d49")
	Jul 17 22:11:57 ingress-addon-legacy-988346 kubelet[1882]: W0717 22:11:57.092600    1882 manager.go:1131] Failed to process watch event {EventType:0 Name:/docker/729d2601f7f6e059fdfa392033cee4e1789686d5427c7c315ae507ef6dee9dd3/crio-d22ecd3b55c6f69a24843e19414a4c53e12916dcae4185a7ba66427dd9a0904b WatchSource:0}: Error finding container d22ecd3b55c6f69a24843e19414a4c53e12916dcae4185a7ba66427dd9a0904b: Status 404 returned error &{%!s(*http.body=&{0xc000bbb8c0 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x750800) %!s(func() error=0x750790)}
	Jul 17 22:12:03 ingress-addon-legacy-988346 kubelet[1882]: E0717 22:12:03.663006    1882 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jul 17 22:12:03 ingress-addon-legacy-988346 kubelet[1882]: E0717 22:12:03.663062    1882 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jul 17 22:12:03 ingress-addon-legacy-988346 kubelet[1882]: E0717 22:12:03.663125    1882 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jul 17 22:12:03 ingress-addon-legacy-988346 kubelet[1882]: E0717 22:12:03.663162    1882 pod_workers.go:191] Error syncing pod 9935ac20-bbd4-4098-9127-d8c5a77daa3d ("kube-ingress-dns-minikube_kube-system(9935ac20-bbd4-4098-9127-d8c5a77daa3d)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Jul 17 22:12:12 ingress-addon-legacy-988346 kubelet[1882]: I0717 22:12:12.533802    1882 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-5ns6n" (UniqueName: "kubernetes.io/secret/9935ac20-bbd4-4098-9127-d8c5a77daa3d-minikube-ingress-dns-token-5ns6n") pod "9935ac20-bbd4-4098-9127-d8c5a77daa3d" (UID: "9935ac20-bbd4-4098-9127-d8c5a77daa3d")
	Jul 17 22:12:12 ingress-addon-legacy-988346 kubelet[1882]: I0717 22:12:12.535754    1882 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9935ac20-bbd4-4098-9127-d8c5a77daa3d-minikube-ingress-dns-token-5ns6n" (OuterVolumeSpecName: "minikube-ingress-dns-token-5ns6n") pod "9935ac20-bbd4-4098-9127-d8c5a77daa3d" (UID: "9935ac20-bbd4-4098-9127-d8c5a77daa3d"). InnerVolumeSpecName "minikube-ingress-dns-token-5ns6n". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 17 22:12:12 ingress-addon-legacy-988346 kubelet[1882]: I0717 22:12:12.634169    1882 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-5ns6n" (UniqueName: "kubernetes.io/secret/9935ac20-bbd4-4098-9127-d8c5a77daa3d-minikube-ingress-dns-token-5ns6n") on node "ingress-addon-legacy-988346" DevicePath ""
	Jul 17 22:12:14 ingress-addon-legacy-988346 kubelet[1882]: E0717 22:12:14.784305    1882 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-mgkqc.1772c74ae9e30b5b", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-mgkqc", UID:"2e63e2f7-c5b3-4030-9025-5d0c81a7ba3b", APIVersion:"v1", ResourceVersion:"487", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-988346"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1258d0faea91f5b, ext:229548846756, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1258d0faea91f5b, ext:229548846756, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-mgkqc.1772c74ae9e30b5b" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jul 17 22:12:14 ingress-addon-legacy-988346 kubelet[1882]: E0717 22:12:14.787963    1882 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-mgkqc.1772c74ae9e30b5b", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-mgkqc", UID:"2e63e2f7-c5b3-4030-9025-5d0c81a7ba3b", APIVersion:"v1", ResourceVersion:"487", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-988346"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1258d0faea91f5b, ext:229548846756, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1258d0faecea5f8, ext:229551306055, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-mgkqc.1772c74ae9e30b5b" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jul 17 22:12:17 ingress-addon-legacy-988346 kubelet[1882]: W0717 22:12:17.038707    1882 pod_container_deletor.go:77] Container "d5bb025e1e2eb650c1f1d04d4f68be5e0bce94b83c4390f18297b690d85eee73" not found in pod's containers
	Jul 17 22:12:18 ingress-addon-legacy-988346 kubelet[1882]: I0717 22:12:18.969499    1882 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/2e63e2f7-c5b3-4030-9025-5d0c81a7ba3b-webhook-cert") pod "2e63e2f7-c5b3-4030-9025-5d0c81a7ba3b" (UID: "2e63e2f7-c5b3-4030-9025-5d0c81a7ba3b")
	Jul 17 22:12:18 ingress-addon-legacy-988346 kubelet[1882]: I0717 22:12:18.969555    1882 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-gwzpf" (UniqueName: "kubernetes.io/secret/2e63e2f7-c5b3-4030-9025-5d0c81a7ba3b-ingress-nginx-token-gwzpf") pod "2e63e2f7-c5b3-4030-9025-5d0c81a7ba3b" (UID: "2e63e2f7-c5b3-4030-9025-5d0c81a7ba3b")
	Jul 17 22:12:18 ingress-addon-legacy-988346 kubelet[1882]: I0717 22:12:18.971516    1882 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e63e2f7-c5b3-4030-9025-5d0c81a7ba3b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "2e63e2f7-c5b3-4030-9025-5d0c81a7ba3b" (UID: "2e63e2f7-c5b3-4030-9025-5d0c81a7ba3b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 17 22:12:18 ingress-addon-legacy-988346 kubelet[1882]: I0717 22:12:18.971736    1882 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e63e2f7-c5b3-4030-9025-5d0c81a7ba3b-ingress-nginx-token-gwzpf" (OuterVolumeSpecName: "ingress-nginx-token-gwzpf") pod "2e63e2f7-c5b3-4030-9025-5d0c81a7ba3b" (UID: "2e63e2f7-c5b3-4030-9025-5d0c81a7ba3b"). InnerVolumeSpecName "ingress-nginx-token-gwzpf". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 17 22:12:19 ingress-addon-legacy-988346 kubelet[1882]: I0717 22:12:19.069878    1882 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/2e63e2f7-c5b3-4030-9025-5d0c81a7ba3b-webhook-cert") on node "ingress-addon-legacy-988346" DevicePath ""
	Jul 17 22:12:19 ingress-addon-legacy-988346 kubelet[1882]: I0717 22:12:19.069928    1882 reconciler.go:319] Volume detached for volume "ingress-nginx-token-gwzpf" (UniqueName: "kubernetes.io/secret/2e63e2f7-c5b3-4030-9025-5d0c81a7ba3b-ingress-nginx-token-gwzpf") on node "ingress-addon-legacy-988346" DevicePath ""
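The repeated ImageInspectError entries above share one cause: the reference cryptexlabs/minikube-ingress-dns:0.3.0@sha256:... names no registry host, and this node's /etc/containers/registries.conf defines no unqualified-search registries, so CRI-O refuses to guess where to pull from. Fully qualifying the reference (docker.io/cryptexlabs/...) or listing docker.io under unqualified-search-registries in registries.conf would avoid the error. A rough Go sketch of the resolution rule (an approximation of the containers/image short-name heuristic, not CRI-O's actual code):

	// shortname.go - a sketch of container short-name resolution.
	package main

	import (
		"fmt"
		"strings"
	)

	// qualify approximates the rule: a reference with no registry host can
	// only be resolved by prepending an unqualified-search registry.
	func qualify(image string, searchRegistries []string) ([]string, error) {
		host := strings.SplitN(image, "/", 2)[0]
		// A first path component containing '.' or ':' (or "localhost") is
		// treated as a registry host; "cryptexlabs" is neither, so the
		// name is short.
		if strings.ContainsAny(host, ".:") || host == "localhost" {
			return []string{image}, nil
		}
		if len(searchRegistries) == 0 {
			return nil, fmt.Errorf("short-name %q did not resolve to an alias and no unqualified-search registries are defined", image)
		}
		var out []string
		for _, r := range searchRegistries {
			out = append(out, r+"/"+image)
		}
		return out, nil
	}

	func main() {
		_, err := qualify("cryptexlabs/minikube-ingress-dns:0.3.0", nil)
		fmt.Println(err) // mirrors the kubelet error above
		c, _ := qualify("cryptexlabs/minikube-ingress-dns:0.3.0", []string{"docker.io"})
		fmt.Println(c) // [docker.io/cryptexlabs/minikube-ingress-dns:0.3.0]
	}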
	
	* 
	* ==> storage-provisioner [b1d7cdbbfeda8856553aace7687a95a60a0fa3217bc33dad1554c6eb6e89b818] <==
	* I0717 22:09:00.262473       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 22:09:00.270902       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 22:09:00.270954       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 22:09:00.277426       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 22:09:00.277520       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2f5f55ff-e14f-49d1-aa8e-d57893732c5c", APIVersion:"v1", ResourceVersion:"429", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-988346_c61c8d49-33fa-4233-8dc6-9f01802b7c08 became leader
	I0717 22:09:00.277573       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-988346_c61c8d49-33fa-4233-8dc6-9f01802b7c08!
	I0717 22:09:00.377881       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-988346_c61c8d49-33fa-4233-8dc6-9f01802b7c08!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-988346 -n ingress-addon-legacy-988346
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-988346 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (181.60s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (2.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-265316 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-265316 -- exec busybox-67b7f59bb-chlgz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-265316 -- exec busybox-67b7f59bb-chlgz -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-265316 -- exec busybox-67b7f59bb-chlgz -- sh -c "ping -c 1 192.168.58.1": exit status 1 (165.641158ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-67b7f59bb-chlgz): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-265316 -- exec busybox-67b7f59bb-dhkzz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-265316 -- exec busybox-67b7f59bb-dhkzz -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-265316 -- exec busybox-67b7f59bb-dhkzz -- sh -c "ping -c 1 192.168.58.1": exit status 1 (172.041415ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-67b7f59bb-dhkzz): exit status 1
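For context on the two failures above: "permission denied (are you root?)" is busybox ping failing to open an ICMP socket, not a routing problem (the PING header prints, then the socket call fails). A raw ICMP socket requires CAP_NET_RAW, and the unprivileged datagram fallback is gated by the net.ipv4.ping_group_range sysctl, whose default ("1 0") admits no groups. A small Go sketch of the two socket paths, under the assumption that the pod has neither the capability nor a matching group:

	// icmp_perms.go - a sketch of the two ways ping can open a socket.
	package main

	import (
		"fmt"
		"syscall"
	)

	func main() {
		// Path 1: raw ICMP socket, what busybox ping attempts; it needs
		// CAP_NET_RAW, so an unprivileged pod gets EPERM here.
		_, err := syscall.Socket(syscall.AF_INET, syscall.SOCK_RAW, syscall.IPPROTO_ICMP)
		fmt.Println("raw ICMP socket:", err)

		// Path 2: unprivileged datagram ICMP socket; only permitted when
		// the caller's group falls inside net.ipv4.ping_group_range.
		_, err = syscall.Socket(syscall.AF_INET, syscall.SOCK_DGRAM, syscall.IPPROTO_ICMP)
		fmt.Println("datagram ICMP socket:", err)
	}

Run as root or with CAP_NET_RAW, the first call succeeds, which is why the same ping works from privileged pods.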
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-265316
helpers_test.go:235: (dbg) docker inspect multinode-265316:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "529ff452b9ecf25960278abb78646d8b0e8f53a086960470b4fafcfa793123af",
	        "Created": "2023-07-17T22:17:01.954645998Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 310464,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-17T22:17:02.255255439Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6cc01e6091959400f260dc442708e7c71630b58dab1f7c344cb00926bd84950",
	        "ResolvConfPath": "/var/lib/docker/containers/529ff452b9ecf25960278abb78646d8b0e8f53a086960470b4fafcfa793123af/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/529ff452b9ecf25960278abb78646d8b0e8f53a086960470b4fafcfa793123af/hostname",
	        "HostsPath": "/var/lib/docker/containers/529ff452b9ecf25960278abb78646d8b0e8f53a086960470b4fafcfa793123af/hosts",
	        "LogPath": "/var/lib/docker/containers/529ff452b9ecf25960278abb78646d8b0e8f53a086960470b4fafcfa793123af/529ff452b9ecf25960278abb78646d8b0e8f53a086960470b4fafcfa793123af-json.log",
	        "Name": "/multinode-265316",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-265316:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-265316",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a357a5dcb4076a73458fa0d94b39519b83bf21961ffadba113b9b553fcc8be65-init/diff:/var/lib/docker/overlay2/08d413eb0908d02df131d41f2ca629e52ff8a5bbd0c0c3f9b2a348a71c834d30/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a357a5dcb4076a73458fa0d94b39519b83bf21961ffadba113b9b553fcc8be65/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a357a5dcb4076a73458fa0d94b39519b83bf21961ffadba113b9b553fcc8be65/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a357a5dcb4076a73458fa0d94b39519b83bf21961ffadba113b9b553fcc8be65/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-265316",
	                "Source": "/var/lib/docker/volumes/multinode-265316/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-265316",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-265316",
	                "name.minikube.sigs.k8s.io": "multinode-265316",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d195a2feaf9a205c828f207bd4bacc1dea146c1951ec068ec8d4a53b1048ad26",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32847"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32846"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32843"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32845"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32844"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d195a2feaf9a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-265316": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "529ff452b9ec",
	                        "multinode-265316"
	                    ],
	                    "NetworkID": "88c3f30f36d4deffc71775b50d385c8e384756b50efe69376ed6b5321729ff92",
	                    "EndpointID": "dbd65e4fbd4c0b137160ccd6aa759d2cab55154c9b9e3b81780fd62b3f161a33",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
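[Analysis note] The ping target 192.168.58.1 is the gateway of the user-defined docker network multinode-265316, shown under NetworkSettings.Networks in the inspect output above. A quick sketch for extracting just that field (standard docker Go-template syntax; the network name is taken from this log):

	# Print the gateway address the test tries to reach from inside the pod.
	docker inspect multinode-265316 \
	  --format '{{(index .NetworkSettings.Networks "multinode-265316").Gateway}}'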
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-265316 -n multinode-265316
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-265316 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-265316 logs -n 25: (1.158047954s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-507702                           | mount-start-2-507702 | jenkins | v1.31.0 | 17 Jul 23 22:16 UTC | 17 Jul 23 22:16 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-507702 ssh -- ls                    | mount-start-2-507702 | jenkins | v1.31.0 | 17 Jul 23 22:16 UTC | 17 Jul 23 22:16 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-489605                           | mount-start-1-489605 | jenkins | v1.31.0 | 17 Jul 23 22:16 UTC | 17 Jul 23 22:16 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-507702 ssh -- ls                    | mount-start-2-507702 | jenkins | v1.31.0 | 17 Jul 23 22:16 UTC | 17 Jul 23 22:16 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-507702                           | mount-start-2-507702 | jenkins | v1.31.0 | 17 Jul 23 22:16 UTC | 17 Jul 23 22:16 UTC |
	| start   | -p mount-start-2-507702                           | mount-start-2-507702 | jenkins | v1.31.0 | 17 Jul 23 22:16 UTC | 17 Jul 23 22:16 UTC |
	| ssh     | mount-start-2-507702 ssh -- ls                    | mount-start-2-507702 | jenkins | v1.31.0 | 17 Jul 23 22:16 UTC | 17 Jul 23 22:16 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-507702                           | mount-start-2-507702 | jenkins | v1.31.0 | 17 Jul 23 22:16 UTC | 17 Jul 23 22:16 UTC |
	| delete  | -p mount-start-1-489605                           | mount-start-1-489605 | jenkins | v1.31.0 | 17 Jul 23 22:16 UTC | 17 Jul 23 22:16 UTC |
	| start   | -p multinode-265316                               | multinode-265316     | jenkins | v1.31.0 | 17 Jul 23 22:16 UTC | 17 Jul 23 22:17 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-265316 -- apply -f                   | multinode-265316     | jenkins | v1.31.0 | 17 Jul 23 22:17 UTC | 17 Jul 23 22:17 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-265316 -- rollout                    | multinode-265316     | jenkins | v1.31.0 | 17 Jul 23 22:17 UTC | 17 Jul 23 22:17 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-265316 -- get pods -o                | multinode-265316     | jenkins | v1.31.0 | 17 Jul 23 22:17 UTC | 17 Jul 23 22:17 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-265316 -- get pods -o                | multinode-265316     | jenkins | v1.31.0 | 17 Jul 23 22:17 UTC | 17 Jul 23 22:17 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-265316 -- exec                       | multinode-265316     | jenkins | v1.31.0 | 17 Jul 23 22:17 UTC | 17 Jul 23 22:17 UTC |
	|         | busybox-67b7f59bb-chlgz --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-265316 -- exec                       | multinode-265316     | jenkins | v1.31.0 | 17 Jul 23 22:17 UTC | 17 Jul 23 22:17 UTC |
	|         | busybox-67b7f59bb-dhkzz --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-265316 -- exec                       | multinode-265316     | jenkins | v1.31.0 | 17 Jul 23 22:17 UTC | 17 Jul 23 22:17 UTC |
	|         | busybox-67b7f59bb-chlgz --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-265316 -- exec                       | multinode-265316     | jenkins | v1.31.0 | 17 Jul 23 22:17 UTC | 17 Jul 23 22:17 UTC |
	|         | busybox-67b7f59bb-dhkzz --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-265316 -- exec                       | multinode-265316     | jenkins | v1.31.0 | 17 Jul 23 22:17 UTC | 17 Jul 23 22:17 UTC |
	|         | busybox-67b7f59bb-chlgz -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-265316 -- exec                       | multinode-265316     | jenkins | v1.31.0 | 17 Jul 23 22:17 UTC | 17 Jul 23 22:17 UTC |
	|         | busybox-67b7f59bb-dhkzz -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-265316 -- get pods -o                | multinode-265316     | jenkins | v1.31.0 | 17 Jul 23 22:17 UTC | 17 Jul 23 22:17 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-265316 -- exec                       | multinode-265316     | jenkins | v1.31.0 | 17 Jul 23 22:17 UTC | 17 Jul 23 22:17 UTC |
	|         | busybox-67b7f59bb-chlgz                           |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-265316 -- exec                       | multinode-265316     | jenkins | v1.31.0 | 17 Jul 23 22:17 UTC |                     |
	|         | busybox-67b7f59bb-chlgz -- sh                     |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-265316 -- exec                       | multinode-265316     | jenkins | v1.31.0 | 17 Jul 23 22:17 UTC | 17 Jul 23 22:17 UTC |
	|         | busybox-67b7f59bb-dhkzz                           |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-265316 -- exec                       | multinode-265316     | jenkins | v1.31.0 | 17 Jul 23 22:17 UTC |                     |
	|         | busybox-67b7f59bb-dhkzz -- sh                     |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
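	[Analysis note] The two ping rows above have no End Time because those exec commands are the ones that failed. The nslookup rows just before them show how the suite derives the address it pings: it resolves host.minikube.internal inside the pod, then slices the answer out of busybox nslookup's output. A sketch of that pipeline run by hand; the assumption that the address sits on line 5, field 3 comes from the awk/cut arguments in the table, not from anything verified here:

	  # Resolve the host gateway from inside the pod, keeping only the address field.
	  out/minikube-linux-amd64 kubectl -p multinode-265316 -- exec busybox-67b7f59bb-dhkzz -- \
	    sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"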
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 22:16:56
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 22:16:56.176505  309853 out.go:296] Setting OutFile to fd 1 ...
	I0717 22:16:56.176654  309853 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:16:56.176664  309853 out.go:309] Setting ErrFile to fd 2...
	I0717 22:16:56.176668  309853 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:16:56.176859  309853 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-218877/.minikube/bin
	I0717 22:16:56.177461  309853 out.go:303] Setting JSON to false
	I0717 22:16:56.178591  309853 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":7160,"bootTime":1689625056,"procs":403,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 22:16:56.178655  309853 start.go:138] virtualization: kvm guest
	I0717 22:16:56.181551  309853 out.go:177] * [multinode-265316] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	I0717 22:16:56.183509  309853 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 22:16:56.183506  309853 notify.go:220] Checking for updates...
	I0717 22:16:56.185460  309853 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 22:16:56.188017  309853 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16899-218877/kubeconfig
	I0717 22:16:56.189579  309853 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-218877/.minikube
	I0717 22:16:56.191278  309853 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 22:16:56.192811  309853 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 22:16:56.194789  309853 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 22:16:56.218544  309853 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 22:16:56.218722  309853 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 22:16:56.274700  309853 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:37 SystemTime:2023-07-17 22:16:56.265613426 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 22:16:56.274797  309853 docker.go:294] overlay module found
	I0717 22:16:56.277041  309853 out.go:177] * Using the docker driver based on user configuration
	I0717 22:16:56.278783  309853 start.go:298] selected driver: docker
	I0717 22:16:56.278802  309853 start.go:880] validating driver "docker" against <nil>
	I0717 22:16:56.279132  309853 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 22:16:56.280430  309853 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 22:16:56.339859  309853 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:37 SystemTime:2023-07-17 22:16:56.330454659 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 22:16:56.340024  309853 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0717 22:16:56.340220  309853 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 22:16:56.342293  309853 out.go:177] * Using Docker driver with root privileges
	I0717 22:16:56.343902  309853 cni.go:84] Creating CNI manager for ""
	I0717 22:16:56.343918  309853 cni.go:137] 0 nodes found, recommending kindnet
	I0717 22:16:56.343927  309853 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 22:16:56.343938  309853 start_flags.go:319] config:
	{Name:multinode-265316 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-265316 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:16:56.345563  309853 out.go:177] * Starting control plane node multinode-265316 in cluster multinode-265316
	I0717 22:16:56.346852  309853 cache.go:122] Beginning downloading kic base image for docker with crio
	I0717 22:16:56.348218  309853 out.go:177] * Pulling base image ...
	I0717 22:16:56.349511  309853 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 22:16:56.349544  309853 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16899-218877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0717 22:16:56.349551  309853 cache.go:57] Caching tarball of preloaded images
	I0717 22:16:56.349571  309853 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 22:16:56.349660  309853 preload.go:174] Found /home/jenkins/minikube-integration/16899-218877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 22:16:56.349674  309853 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0717 22:16:56.350028  309853 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/multinode-265316/config.json ...
	I0717 22:16:56.350058  309853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/multinode-265316/config.json: {Name:mkfcd5848fa3d4e1233cbdffc0ce71a8b3349b93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:16:56.367300  309853 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0717 22:16:56.367330  309853 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0717 22:16:56.367358  309853 cache.go:195] Successfully downloaded all kic artifacts
	I0717 22:16:56.367398  309853 start.go:365] acquiring machines lock for multinode-265316: {Name:mk658c01275c37ed2f1de46ccd45fee858c3bcd2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:16:56.367525  309853 start.go:369] acquired machines lock for "multinode-265316" in 93.125µs
	I0717 22:16:56.367560  309853 start.go:93] Provisioning new machine with config: &{Name:multinode-265316 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-265316 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 22:16:56.367670  309853 start.go:125] createHost starting for "" (driver="docker")
	I0717 22:16:56.369944  309853 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0717 22:16:56.370215  309853 start.go:159] libmachine.API.Create for "multinode-265316" (driver="docker")
	I0717 22:16:56.370241  309853 client.go:168] LocalClient.Create starting
	I0717 22:16:56.370312  309853 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca.pem
	I0717 22:16:56.370356  309853 main.go:141] libmachine: Decoding PEM data...
	I0717 22:16:56.370379  309853 main.go:141] libmachine: Parsing certificate...
	I0717 22:16:56.370436  309853 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16899-218877/.minikube/certs/cert.pem
	I0717 22:16:56.370467  309853 main.go:141] libmachine: Decoding PEM data...
	I0717 22:16:56.370486  309853 main.go:141] libmachine: Parsing certificate...
	I0717 22:16:56.370811  309853 cli_runner.go:164] Run: docker network inspect multinode-265316 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0717 22:16:56.386578  309853 cli_runner.go:211] docker network inspect multinode-265316 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0717 22:16:56.386651  309853 network_create.go:281] running [docker network inspect multinode-265316] to gather additional debugging logs...
	I0717 22:16:56.386666  309853 cli_runner.go:164] Run: docker network inspect multinode-265316
	W0717 22:16:56.404614  309853 cli_runner.go:211] docker network inspect multinode-265316 returned with exit code 1
	I0717 22:16:56.404640  309853 network_create.go:284] error running [docker network inspect multinode-265316]: docker network inspect multinode-265316: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-265316 not found
	I0717 22:16:56.404651  309853 network_create.go:286] output of [docker network inspect multinode-265316]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-265316 not found
	
	** /stderr **
	I0717 22:16:56.404702  309853 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 22:16:56.422847  309853 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d1763de4eb47 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:f3:47:25:e8} reservation:<nil>}
	I0717 22:16:56.423530  309853 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a43230}
	I0717 22:16:56.423563  309853 network_create.go:123] attempt to create docker network multinode-265316 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0717 22:16:56.423623  309853 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-265316 multinode-265316
	I0717 22:16:56.483935  309853 network_create.go:107] docker network multinode-265316 192.168.58.0/24 created
	I0717 22:16:56.483974  309853 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-265316" container
	I0717 22:16:56.484048  309853 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0717 22:16:56.501107  309853 cli_runner.go:164] Run: docker volume create multinode-265316 --label name.minikube.sigs.k8s.io=multinode-265316 --label created_by.minikube.sigs.k8s.io=true
	I0717 22:16:56.519308  309853 oci.go:103] Successfully created a docker volume multinode-265316
	I0717 22:16:56.519424  309853 cli_runner.go:164] Run: docker run --rm --name multinode-265316-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-265316 --entrypoint /usr/bin/test -v multinode-265316:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0717 22:16:57.059685  309853 oci.go:107] Successfully prepared a docker volume multinode-265316
	I0717 22:16:57.059745  309853 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 22:16:57.059778  309853 kic.go:190] Starting extracting preloaded images to volume ...
	I0717 22:16:57.059873  309853 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16899-218877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-265316:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0717 22:17:01.887427  309853 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16899-218877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-265316:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (4.827453209s)
	I0717 22:17:01.887476  309853 kic.go:199] duration metric: took 4.827694 seconds to extract preloaded images to volume
	W0717 22:17:01.887628  309853 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0717 22:17:01.887749  309853 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0717 22:17:01.939553  309853 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-265316 --name multinode-265316 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-265316 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-265316 --network multinode-265316 --ip 192.168.58.2 --volume multinode-265316:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0717 22:17:02.263098  309853 cli_runner.go:164] Run: docker container inspect multinode-265316 --format={{.State.Running}}
	I0717 22:17:02.280433  309853 cli_runner.go:164] Run: docker container inspect multinode-265316 --format={{.State.Status}}
	I0717 22:17:02.298901  309853 cli_runner.go:164] Run: docker exec multinode-265316 stat /var/lib/dpkg/alternatives/iptables
	I0717 22:17:02.372672  309853 oci.go:144] the created container "multinode-265316" has a running status.
	I0717 22:17:02.372723  309853 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16899-218877/.minikube/machines/multinode-265316/id_rsa...
	I0717 22:17:02.687399  309853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/machines/multinode-265316/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0717 22:17:02.687467  309853 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16899-218877/.minikube/machines/multinode-265316/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0717 22:17:02.709717  309853 cli_runner.go:164] Run: docker container inspect multinode-265316 --format={{.State.Status}}
	I0717 22:17:02.726905  309853 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0717 22:17:02.726930  309853 kic_runner.go:114] Args: [docker exec --privileged multinode-265316 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0717 22:17:02.801764  309853 cli_runner.go:164] Run: docker container inspect multinode-265316 --format={{.State.Status}}
	I0717 22:17:02.825024  309853 machine.go:88] provisioning docker machine ...
	I0717 22:17:02.825066  309853 ubuntu.go:169] provisioning hostname "multinode-265316"
	I0717 22:17:02.825146  309853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-265316
	I0717 22:17:02.845777  309853 main.go:141] libmachine: Using SSH client type: native
	I0717 22:17:02.846206  309853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I0717 22:17:02.846229  309853 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-265316 && echo "multinode-265316" | sudo tee /etc/hostname
	I0717 22:17:03.002508  309853 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-265316
	
	I0717 22:17:03.002644  309853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-265316
	I0717 22:17:03.020509  309853 main.go:141] libmachine: Using SSH client type: native
	I0717 22:17:03.021191  309853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I0717 22:17:03.021242  309853 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-265316' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-265316/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-265316' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 22:17:03.147570  309853 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 22:17:03.147599  309853 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16899-218877/.minikube CaCertPath:/home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16899-218877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16899-218877/.minikube}
	I0717 22:17:03.147632  309853 ubuntu.go:177] setting up certificates
	I0717 22:17:03.147641  309853 provision.go:83] configureAuth start
	I0717 22:17:03.147695  309853 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-265316
	I0717 22:17:03.164488  309853 provision.go:138] copyHostCerts
	I0717 22:17:03.164529  309853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16899-218877/.minikube/ca.pem
	I0717 22:17:03.164562  309853 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-218877/.minikube/ca.pem, removing ...
	I0717 22:17:03.164573  309853 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-218877/.minikube/ca.pem
	I0717 22:17:03.164636  309853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16899-218877/.minikube/ca.pem (1078 bytes)
	I0717 22:17:03.164715  309853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16899-218877/.minikube/cert.pem
	I0717 22:17:03.164732  309853 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-218877/.minikube/cert.pem, removing ...
	I0717 22:17:03.164736  309853 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-218877/.minikube/cert.pem
	I0717 22:17:03.164761  309853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16899-218877/.minikube/cert.pem (1123 bytes)
	I0717 22:17:03.164803  309853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16899-218877/.minikube/key.pem
	I0717 22:17:03.164818  309853 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-218877/.minikube/key.pem, removing ...
	I0717 22:17:03.164824  309853 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-218877/.minikube/key.pem
	I0717 22:17:03.164843  309853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16899-218877/.minikube/key.pem (1679 bytes)
	I0717 22:17:03.164888  309853 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16899-218877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca-key.pem org=jenkins.multinode-265316 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-265316]
	I0717 22:17:03.365751  309853 provision.go:172] copyRemoteCerts
	I0717 22:17:03.365812  309853 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 22:17:03.365847  309853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-265316
	I0717 22:17:03.384000  309853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/multinode-265316/id_rsa Username:docker}
	I0717 22:17:03.475950  309853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 22:17:03.476029  309853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 22:17:03.497519  309853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 22:17:03.497580  309853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0717 22:17:03.519216  309853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 22:17:03.519271  309853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 22:17:03.540381  309853 provision.go:86] duration metric: configureAuth took 392.720986ms
	I0717 22:17:03.540412  309853 ubuntu.go:193] setting minikube options for container-runtime
	I0717 22:17:03.540607  309853 config.go:182] Loaded profile config "multinode-265316": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:17:03.540712  309853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-265316
	I0717 22:17:03.556665  309853 main.go:141] libmachine: Using SSH client type: native
	I0717 22:17:03.557071  309853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I0717 22:17:03.557092  309853 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 22:17:03.768569  309853 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 22:17:03.768596  309853 machine.go:91] provisioned docker machine in 943.547779ms
	I0717 22:17:03.768606  309853 client.go:171] LocalClient.Create took 7.398359972s
	I0717 22:17:03.768628  309853 start.go:167] duration metric: libmachine.API.Create for "multinode-265316" took 7.398411038s
	I0717 22:17:03.768637  309853 start.go:300] post-start starting for "multinode-265316" (driver="docker")
	I0717 22:17:03.768649  309853 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 22:17:03.768715  309853 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 22:17:03.768768  309853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-265316
	I0717 22:17:03.784505  309853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/multinode-265316/id_rsa Username:docker}
	I0717 22:17:03.876008  309853 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 22:17:03.878902  309853 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.2 LTS"
	I0717 22:17:03.878921  309853 command_runner.go:130] > NAME="Ubuntu"
	I0717 22:17:03.878927  309853 command_runner.go:130] > VERSION_ID="22.04"
	I0717 22:17:03.878932  309853 command_runner.go:130] > VERSION="22.04.2 LTS (Jammy Jellyfish)"
	I0717 22:17:03.878937  309853 command_runner.go:130] > VERSION_CODENAME=jammy
	I0717 22:17:03.878941  309853 command_runner.go:130] > ID=ubuntu
	I0717 22:17:03.878945  309853 command_runner.go:130] > ID_LIKE=debian
	I0717 22:17:03.878949  309853 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0717 22:17:03.878953  309853 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0717 22:17:03.878960  309853 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0717 22:17:03.878968  309853 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0717 22:17:03.878972  309853 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0717 22:17:03.879028  309853 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 22:17:03.879051  309853 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 22:17:03.879061  309853 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 22:17:03.879068  309853 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0717 22:17:03.879080  309853 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-218877/.minikube/addons for local assets ...
	I0717 22:17:03.879166  309853 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-218877/.minikube/files for local assets ...
	I0717 22:17:03.879235  309853 filesync.go:149] local asset: /home/jenkins/minikube-integration/16899-218877/.minikube/files/etc/ssl/certs/2256422.pem -> 2256422.pem in /etc/ssl/certs
	I0717 22:17:03.879246  309853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/files/etc/ssl/certs/2256422.pem -> /etc/ssl/certs/2256422.pem
	I0717 22:17:03.879341  309853 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 22:17:03.886802  309853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/files/etc/ssl/certs/2256422.pem --> /etc/ssl/certs/2256422.pem (1708 bytes)
	I0717 22:17:03.907816  309853 start.go:303] post-start completed in 139.159793ms
	I0717 22:17:03.908166  309853 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-265316
	I0717 22:17:03.923841  309853 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/multinode-265316/config.json ...
	I0717 22:17:03.924124  309853 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 22:17:03.924178  309853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-265316
	I0717 22:17:03.939695  309853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/multinode-265316/id_rsa Username:docker}
	I0717 22:17:04.028100  309853 command_runner.go:130] > 21%
	I0717 22:17:04.028181  309853 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 22:17:04.032013  309853 command_runner.go:130] > 233G
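The two df probes above read the usage percentage (df -h, column 5: "21%") and the available space in whole gigabytes (df -BG, column 4: "233G") from the second output row. A sketch of the same extraction done in Go instead of awk:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func dfField(flags string, col int) string {
	out, err := exec.Command("sh", "-c", "df "+flags+" /var").Output()
	if err != nil {
		log.Fatal(err)
	}
	rows := strings.Split(strings.TrimSpace(string(out)), "\n")
	if len(rows) < 2 {
		log.Fatal("unexpected df output")
	}
	return strings.Fields(rows[1])[col-1] // awk columns are 1-based
}

func main() {
	fmt.Println("use%:", dfField("-h", 5))  // e.g. 21%
	fmt.Println("free:", dfField("-BG", 4)) // e.g. 233G
}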
	I0717 22:17:04.032216  309853 start.go:128] duration metric: createHost completed in 7.664529183s
	I0717 22:17:04.032253  309853 start.go:83] releasing machines lock for "multinode-265316", held for 7.664715519s
	I0717 22:17:04.032326  309853 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-265316
	I0717 22:17:04.048298  309853 ssh_runner.go:195] Run: cat /version.json
	I0717 22:17:04.048353  309853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-265316
	I0717 22:17:04.048355  309853 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 22:17:04.048417  309853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-265316
	I0717 22:17:04.065416  309853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/multinode-265316/id_rsa Username:docker}
	I0717 22:17:04.066031  309853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/multinode-265316/id_rsa Username:docker}
	I0717 22:17:04.151211  309853 command_runner.go:130] > {"iso_version": "v1.30.1-1689243309-16875", "kicbase_version": "v0.0.40", "minikube_version": "v1.31.0", "commit": "085433cd1b734742870dea5be8f9ee2ce4c54148"}
	I0717 22:17:04.151343  309853 ssh_runner.go:195] Run: systemctl --version
	I0717 22:17:04.246820  309853 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0717 22:17:04.246870  309853 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.9)
	I0717 22:17:04.246886  309853 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0717 22:17:04.246941  309853 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 22:17:04.383898  309853 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 22:17:04.387983  309853 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0717 22:17:04.388014  309853 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0717 22:17:04.388032  309853 command_runner.go:130] > Device: 37h/55d	Inode: 2846107     Links: 1
	I0717 22:17:04.388042  309853 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 22:17:04.388052  309853 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0717 22:17:04.388066  309853 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0717 22:17:04.388075  309853 command_runner.go:130] > Change: 2023-07-17 21:58:25.914595095 +0000
	I0717 22:17:04.388085  309853 command_runner.go:130] >  Birth: 2023-07-17 21:58:25.914595095 +0000
	I0717 22:17:04.388158  309853 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 22:17:04.406016  309853 cni.go:227] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0717 22:17:04.406113  309853 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 22:17:04.433135  309853 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0717 22:17:04.433164  309853 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
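The two find/-exec mv runs above neutralize any preinstalled loopback, bridge, and podman CNI configs by renaming them to *.mk_disabled, so they cannot conflict with the CNI minikube deploys. A sketch of the same rename pass (needs root on the node):

package main

import (
	"log"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	for _, pat := range []string{"*loopback.conf*", "*bridge*", "*podman*"} {
		matches, _ := filepath.Glob(filepath.Join("/etc/cni/net.d", pat))
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled on a previous pass
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				log.Println(err)
			}
		}
	}
}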
	I0717 22:17:04.433172  309853 start.go:466] detecting cgroup driver to use...
	I0717 22:17:04.433203  309853 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 22:17:04.433238  309853 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 22:17:04.447268  309853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 22:17:04.457382  309853 docker.go:196] disabling cri-docker service (if available) ...
	I0717 22:17:04.457437  309853 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 22:17:04.469785  309853 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 22:17:04.482518  309853 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 22:17:04.549171  309853 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 22:17:04.629129  309853 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0717 22:17:04.629173  309853 docker.go:212] disabling docker service ...
	I0717 22:17:04.629235  309853 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 22:17:04.646443  309853 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 22:17:04.657229  309853 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 22:17:04.667568  309853 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0717 22:17:04.728245  309853 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 22:17:04.805005  309853 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0717 22:17:04.805104  309853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
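The docker.go:196/212 block above is a fixed stop, disable, mask sequence that ensures neither cri-dockerd nor dockerd can reclaim the runtime socket after a reboot; the "Created symlink ... → /dev/null" lines are systemd confirming the mask. A sketch of the same sequence (stop flags match the log):

package main

import (
	"fmt"
	"os/exec"
)

func systemctl(args ...string) {
	out, err := exec.Command("sudo", append([]string{"systemctl"}, args...)...).CombinedOutput()
	fmt.Printf("systemctl %v:\n%s(err=%v)\n", args, out, err)
}

func main() {
	for _, unit := range []string{"cri-docker.socket", "cri-docker.service", "docker.socket", "docker.service"} {
		systemctl("stop", "-f", unit)
	}
	systemctl("disable", "cri-docker.socket")
	systemctl("mask", "cri-docker.service")
	systemctl("disable", "docker.socket")
	systemctl("mask", "docker.service")
}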
	I0717 22:17:04.815437  309853 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 22:17:04.829640  309853 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0717 22:17:04.830389  309853 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 22:17:04.830451  309853 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:17:04.839129  309853 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 22:17:04.839196  309853 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:17:04.848087  309853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:17:04.856737  309853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
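The tee and sed one-liners above wire the node to CRI-O: crictl is pointed at CRI-O's socket, the pause image is pinned to registry.k8s.io/pause:3.9, and the cgroup driver is set to cgroupfs with conmon running in the pod cgroup. A sketch of the same rewrites done in Go instead of sed (same file paths as the log):

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	// Point crictl at CRI-O's socket (the tee /etc/crictl.yaml step above).
	if err := os.WriteFile("/etc/crictl.yaml",
		[]byte("runtime-endpoint: unix:///var/run/crio/crio.sock\n"), 0644); err != nil {
		log.Fatal(err)
	}
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	b, err := os.ReadFile(conf)
	if err != nil {
		log.Fatal(err)
	}
	s := string(b)
	// Pin the pause image, drop any stale conmon_cgroup line, then set the
	// cgroup driver and re-add conmon_cgroup right after it, as the seds do.
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.9"`)
	s = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(s, "")
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(s, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	if err := os.WriteFile(conf, []byte(s), 0644); err != nil {
		log.Fatal(err)
	}
}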
	I0717 22:17:04.865395  309853 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 22:17:04.873358  309853 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 22:17:04.879915  309853 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0717 22:17:04.880514  309853 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
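The two kernel checks above confirm that bridged traffic is visible to iptables and then force IPv4 forwarding on, both prerequisites for pod networking. A sketch reading and writing the same /proc entries directly (needs root, and br_netfilter must be loaded):

package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	b, err := os.ReadFile("/proc/sys/net/bridge/bridge-nf-call-iptables")
	if err != nil {
		log.Fatal(err) // br_netfilter module not loaded
	}
	fmt.Println("net.bridge.bridge-nf-call-iptables =", strings.TrimSpace(string(b)))
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
		log.Fatal(err)
	}
}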
	I0717 22:17:04.887770  309853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 22:17:04.964066  309853 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 22:17:05.064088  309853 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 22:17:05.064158  309853 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 22:17:05.067564  309853 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0717 22:17:05.067595  309853 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0717 22:17:05.067605  309853 command_runner.go:130] > Device: 40h/64d	Inode: 186         Links: 1
	I0717 22:17:05.067619  309853 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 22:17:05.067624  309853 command_runner.go:130] > Access: 2023-07-17 22:17:05.047774405 +0000
	I0717 22:17:05.067630  309853 command_runner.go:130] > Modify: 2023-07-17 22:17:05.047774405 +0000
	I0717 22:17:05.067639  309853 command_runner.go:130] > Change: 2023-07-17 22:17:05.047774405 +0000
	I0717 22:17:05.067643  309853 command_runner.go:130] >  Birth: -
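start.go:513 above waits up to 60s after the crio restart for /var/run/crio/crio.sock to exist; the stat output confirms it is a socket with mode 0660. A sketch of that wait loop:

package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Succeed once the path exists and is actually a unix socket.
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
}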
	I0717 22:17:05.067669  309853 start.go:534] Will wait 60s for crictl version
	I0717 22:17:05.067722  309853 ssh_runner.go:195] Run: which crictl
	I0717 22:17:05.070585  309853 command_runner.go:130] > /usr/bin/crictl
	I0717 22:17:05.070671  309853 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 22:17:05.100433  309853 command_runner.go:130] > Version:  0.1.0
	I0717 22:17:05.100453  309853 command_runner.go:130] > RuntimeName:  cri-o
	I0717 22:17:05.100458  309853 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0717 22:17:05.100463  309853 command_runner.go:130] > RuntimeApiVersion:  v1
	I0717 22:17:05.102402  309853 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0717 22:17:05.102467  309853 ssh_runner.go:195] Run: crio --version
	I0717 22:17:05.133504  309853 command_runner.go:130] > crio version 1.24.6
	I0717 22:17:05.133530  309853 command_runner.go:130] > Version:          1.24.6
	I0717 22:17:05.133543  309853 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0717 22:17:05.133558  309853 command_runner.go:130] > GitTreeState:     clean
	I0717 22:17:05.133570  309853 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0717 22:17:05.133581  309853 command_runner.go:130] > GoVersion:        go1.18.2
	I0717 22:17:05.133588  309853 command_runner.go:130] > Compiler:         gc
	I0717 22:17:05.133600  309853 command_runner.go:130] > Platform:         linux/amd64
	I0717 22:17:05.133608  309853 command_runner.go:130] > Linkmode:         dynamic
	I0717 22:17:05.133626  309853 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0717 22:17:05.133637  309853 command_runner.go:130] > SeccompEnabled:   true
	I0717 22:17:05.133647  309853 command_runner.go:130] > AppArmorEnabled:  false
	I0717 22:17:05.134941  309853 ssh_runner.go:195] Run: crio --version
	I0717 22:17:05.165827  309853 command_runner.go:130] > crio version 1.24.6
	I0717 22:17:05.165849  309853 command_runner.go:130] > Version:          1.24.6
	I0717 22:17:05.165859  309853 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0717 22:17:05.165866  309853 command_runner.go:130] > GitTreeState:     clean
	I0717 22:17:05.165876  309853 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0717 22:17:05.165883  309853 command_runner.go:130] > GoVersion:        go1.18.2
	I0717 22:17:05.165890  309853 command_runner.go:130] > Compiler:         gc
	I0717 22:17:05.165897  309853 command_runner.go:130] > Platform:         linux/amd64
	I0717 22:17:05.165921  309853 command_runner.go:130] > Linkmode:         dynamic
	I0717 22:17:05.165937  309853 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0717 22:17:05.165949  309853 command_runner.go:130] > SeccompEnabled:   true
	I0717 22:17:05.165958  309853 command_runner.go:130] > AppArmorEnabled:  false
	I0717 22:17:05.170409  309853 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.6 ...
	I0717 22:17:05.171882  309853 cli_runner.go:164] Run: docker network inspect multinode-265316 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 22:17:05.189199  309853 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0717 22:17:05.192654  309853 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
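The bash one-liner above rewrites /etc/hosts so host.minikube.internal resolves to the network gateway (192.168.58.1 here), staging the new file in /tmp and copying it into place with sudo. A sketch of the same rewrite, writing the file back directly instead of staging through /tmp:

package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	b, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(b), "\n"), "\n") {
		// Drop any stale entry, as the grep -v in the log does.
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, "192.168.58.1\thost.minikube.internal")
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		log.Fatal(err)
	}
}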
	I0717 22:17:05.202509  309853 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 22:17:05.202568  309853 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 22:17:05.248159  309853 command_runner.go:130] > {
	I0717 22:17:05.248192  309853 command_runner.go:130] >   "images": [
	I0717 22:17:05.248200  309853 command_runner.go:130] >     {
	I0717 22:17:05.248219  309853 command_runner.go:130] >       "id": "b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da",
	I0717 22:17:05.248226  309853 command_runner.go:130] >       "repoTags": [
	I0717 22:17:05.248240  309853 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0717 22:17:05.248246  309853 command_runner.go:130] >       ],
	I0717 22:17:05.248252  309853 command_runner.go:130] >       "repoDigests": [
	I0717 22:17:05.248276  309853 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974",
	I0717 22:17:05.248292  309853 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"
	I0717 22:17:05.248301  309853 command_runner.go:130] >       ],
	I0717 22:17:05.248307  309853 command_runner.go:130] >       "size": "65249302",
	I0717 22:17:05.248314  309853 command_runner.go:130] >       "uid": null,
	I0717 22:17:05.248318  309853 command_runner.go:130] >       "username": "",
	I0717 22:17:05.248328  309853 command_runner.go:130] >       "spec": null,
	I0717 22:17:05.248338  309853 command_runner.go:130] >       "pinned": false
	I0717 22:17:05.248348  309853 command_runner.go:130] >     },
	I0717 22:17:05.248355  309853 command_runner.go:130] >     {
	I0717 22:17:05.248372  309853 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0717 22:17:05.248382  309853 command_runner.go:130] >       "repoTags": [
	I0717 22:17:05.248394  309853 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0717 22:17:05.248403  309853 command_runner.go:130] >       ],
	I0717 22:17:05.248413  309853 command_runner.go:130] >       "repoDigests": [
	I0717 22:17:05.248424  309853 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0717 22:17:05.248440  309853 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0717 22:17:05.248450  309853 command_runner.go:130] >       ],
	I0717 22:17:05.248462  309853 command_runner.go:130] >       "size": "31470524",
	I0717 22:17:05.248472  309853 command_runner.go:130] >       "uid": null,
	I0717 22:17:05.248482  309853 command_runner.go:130] >       "username": "",
	I0717 22:17:05.248492  309853 command_runner.go:130] >       "spec": null,
	I0717 22:17:05.248502  309853 command_runner.go:130] >       "pinned": false
	I0717 22:17:05.248511  309853 command_runner.go:130] >     },
	I0717 22:17:05.248519  309853 command_runner.go:130] >     {
	I0717 22:17:05.248528  309853 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0717 22:17:05.248538  309853 command_runner.go:130] >       "repoTags": [
	I0717 22:17:05.248550  309853 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0717 22:17:05.248560  309853 command_runner.go:130] >       ],
	I0717 22:17:05.248571  309853 command_runner.go:130] >       "repoDigests": [
	I0717 22:17:05.248587  309853 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0717 22:17:05.248603  309853 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0717 22:17:05.248611  309853 command_runner.go:130] >       ],
	I0717 22:17:05.248620  309853 command_runner.go:130] >       "size": "53621675",
	I0717 22:17:05.248629  309853 command_runner.go:130] >       "uid": null,
	I0717 22:17:05.248640  309853 command_runner.go:130] >       "username": "",
	I0717 22:17:05.248651  309853 command_runner.go:130] >       "spec": null,
	I0717 22:17:05.248661  309853 command_runner.go:130] >       "pinned": false
	I0717 22:17:05.248670  309853 command_runner.go:130] >     },
	I0717 22:17:05.248678  309853 command_runner.go:130] >     {
	I0717 22:17:05.248692  309853 command_runner.go:130] >       "id": "86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681",
	I0717 22:17:05.248703  309853 command_runner.go:130] >       "repoTags": [
	I0717 22:17:05.248712  309853 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.7-0"
	I0717 22:17:05.248722  309853 command_runner.go:130] >       ],
	I0717 22:17:05.248732  309853 command_runner.go:130] >       "repoDigests": [
	I0717 22:17:05.248748  309853 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83",
	I0717 22:17:05.248764  309853 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9"
	I0717 22:17:05.248777  309853 command_runner.go:130] >       ],
	I0717 22:17:05.248787  309853 command_runner.go:130] >       "size": "297083935",
	I0717 22:17:05.248797  309853 command_runner.go:130] >       "uid": {
	I0717 22:17:05.248804  309853 command_runner.go:130] >         "value": "0"
	I0717 22:17:05.248809  309853 command_runner.go:130] >       },
	I0717 22:17:05.248820  309853 command_runner.go:130] >       "username": "",
	I0717 22:17:05.248830  309853 command_runner.go:130] >       "spec": null,
	I0717 22:17:05.248837  309853 command_runner.go:130] >       "pinned": false
	I0717 22:17:05.248846  309853 command_runner.go:130] >     },
	I0717 22:17:05.248855  309853 command_runner.go:130] >     {
	I0717 22:17:05.248868  309853 command_runner.go:130] >       "id": "08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a",
	I0717 22:17:05.248878  309853 command_runner.go:130] >       "repoTags": [
	I0717 22:17:05.248890  309853 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.27.3"
	I0717 22:17:05.248898  309853 command_runner.go:130] >       ],
	I0717 22:17:05.248903  309853 command_runner.go:130] >       "repoDigests": [
	I0717 22:17:05.248919  309853 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb",
	I0717 22:17:05.248935  309853 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:fd03335dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0"
	I0717 22:17:05.248945  309853 command_runner.go:130] >       ],
	I0717 22:17:05.248955  309853 command_runner.go:130] >       "size": "122065872",
	I0717 22:17:05.248964  309853 command_runner.go:130] >       "uid": {
	I0717 22:17:05.248974  309853 command_runner.go:130] >         "value": "0"
	I0717 22:17:05.248984  309853 command_runner.go:130] >       },
	I0717 22:17:05.248993  309853 command_runner.go:130] >       "username": "",
	I0717 22:17:05.249000  309853 command_runner.go:130] >       "spec": null,
	I0717 22:17:05.249005  309853 command_runner.go:130] >       "pinned": false
	I0717 22:17:05.249014  309853 command_runner.go:130] >     },
	I0717 22:17:05.249023  309853 command_runner.go:130] >     {
	I0717 22:17:05.249035  309853 command_runner.go:130] >       "id": "7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f",
	I0717 22:17:05.249045  309853 command_runner.go:130] >       "repoTags": [
	I0717 22:17:05.249057  309853 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.27.3"
	I0717 22:17:05.249066  309853 command_runner.go:130] >       ],
	I0717 22:17:05.249073  309853 command_runner.go:130] >       "repoDigests": [
	I0717 22:17:05.249088  309853 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e",
	I0717 22:17:05.249098  309853 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:d3bdc20876edfaa4894cf8464dc98592385a43cbc033b37846dccc2460c7bc06"
	I0717 22:17:05.249106  309853 command_runner.go:130] >       ],
	I0717 22:17:05.249117  309853 command_runner.go:130] >       "size": "113919286",
	I0717 22:17:05.249124  309853 command_runner.go:130] >       "uid": {
	I0717 22:17:05.249135  309853 command_runner.go:130] >         "value": "0"
	I0717 22:17:05.249144  309853 command_runner.go:130] >       },
	I0717 22:17:05.249154  309853 command_runner.go:130] >       "username": "",
	I0717 22:17:05.249163  309853 command_runner.go:130] >       "spec": null,
	I0717 22:17:05.249173  309853 command_runner.go:130] >       "pinned": false
	I0717 22:17:05.249182  309853 command_runner.go:130] >     },
	I0717 22:17:05.249189  309853 command_runner.go:130] >     {
	I0717 22:17:05.249196  309853 command_runner.go:130] >       "id": "5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c",
	I0717 22:17:05.249206  309853 command_runner.go:130] >       "repoTags": [
	I0717 22:17:05.249227  309853 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.27.3"
	I0717 22:17:05.249237  309853 command_runner.go:130] >       ],
	I0717 22:17:05.249244  309853 command_runner.go:130] >       "repoDigests": [
	I0717 22:17:05.249259  309853 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f",
	I0717 22:17:05.249276  309853 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699"
	I0717 22:17:05.249283  309853 command_runner.go:130] >       ],
	I0717 22:17:05.249288  309853 command_runner.go:130] >       "size": "72713623",
	I0717 22:17:05.249297  309853 command_runner.go:130] >       "uid": null,
	I0717 22:17:05.249304  309853 command_runner.go:130] >       "username": "",
	I0717 22:17:05.249314  309853 command_runner.go:130] >       "spec": null,
	I0717 22:17:05.249321  309853 command_runner.go:130] >       "pinned": false
	I0717 22:17:05.249331  309853 command_runner.go:130] >     },
	I0717 22:17:05.249337  309853 command_runner.go:130] >     {
	I0717 22:17:05.249351  309853 command_runner.go:130] >       "id": "41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a",
	I0717 22:17:05.249360  309853 command_runner.go:130] >       "repoTags": [
	I0717 22:17:05.249373  309853 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.27.3"
	I0717 22:17:05.249382  309853 command_runner.go:130] >       ],
	I0717 22:17:05.249387  309853 command_runner.go:130] >       "repoDigests": [
	I0717 22:17:05.249445  309853 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082",
	I0717 22:17:05.249463  309853 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db3218a4b16ec19af174957e518df40c8"
	I0717 22:17:05.249469  309853 command_runner.go:130] >       ],
	I0717 22:17:05.249477  309853 command_runner.go:130] >       "size": "59811126",
	I0717 22:17:05.249485  309853 command_runner.go:130] >       "uid": {
	I0717 22:17:05.249495  309853 command_runner.go:130] >         "value": "0"
	I0717 22:17:05.249502  309853 command_runner.go:130] >       },
	I0717 22:17:05.249512  309853 command_runner.go:130] >       "username": "",
	I0717 22:17:05.249521  309853 command_runner.go:130] >       "spec": null,
	I0717 22:17:05.249531  309853 command_runner.go:130] >       "pinned": false
	I0717 22:17:05.249539  309853 command_runner.go:130] >     },
	I0717 22:17:05.249543  309853 command_runner.go:130] >     {
	I0717 22:17:05.249549  309853 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0717 22:17:05.249556  309853 command_runner.go:130] >       "repoTags": [
	I0717 22:17:05.249560  309853 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0717 22:17:05.249566  309853 command_runner.go:130] >       ],
	I0717 22:17:05.249570  309853 command_runner.go:130] >       "repoDigests": [
	I0717 22:17:05.249579  309853 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0717 22:17:05.249588  309853 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0717 22:17:05.249592  309853 command_runner.go:130] >       ],
	I0717 22:17:05.249596  309853 command_runner.go:130] >       "size": "750414",
	I0717 22:17:05.249600  309853 command_runner.go:130] >       "uid": {
	I0717 22:17:05.249605  309853 command_runner.go:130] >         "value": "65535"
	I0717 22:17:05.249611  309853 command_runner.go:130] >       },
	I0717 22:17:05.249615  309853 command_runner.go:130] >       "username": "",
	I0717 22:17:05.249621  309853 command_runner.go:130] >       "spec": null,
	I0717 22:17:05.249626  309853 command_runner.go:130] >       "pinned": false
	I0717 22:17:05.249631  309853 command_runner.go:130] >     }
	I0717 22:17:05.249634  309853 command_runner.go:130] >   ]
	I0717 22:17:05.249640  309853 command_runner.go:130] > }
	I0717 22:17:05.250202  309853 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 22:17:05.250221  309853 crio.go:415] Images already preloaded, skipping extraction
	I0717 22:17:05.250264  309853 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 22:17:05.281531  309853 command_runner.go:130] > {
	I0717 22:17:05.281556  309853 command_runner.go:130] >   "images": [
	I0717 22:17:05.281560  309853 command_runner.go:130] >     {
	I0717 22:17:05.281568  309853 command_runner.go:130] >       "id": "b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da",
	I0717 22:17:05.281573  309853 command_runner.go:130] >       "repoTags": [
	I0717 22:17:05.281579  309853 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0717 22:17:05.281583  309853 command_runner.go:130] >       ],
	I0717 22:17:05.281588  309853 command_runner.go:130] >       "repoDigests": [
	I0717 22:17:05.281596  309853 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974",
	I0717 22:17:05.281605  309853 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"
	I0717 22:17:05.281611  309853 command_runner.go:130] >       ],
	I0717 22:17:05.281619  309853 command_runner.go:130] >       "size": "65249302",
	I0717 22:17:05.281630  309853 command_runner.go:130] >       "uid": null,
	I0717 22:17:05.281641  309853 command_runner.go:130] >       "username": "",
	I0717 22:17:05.281651  309853 command_runner.go:130] >       "spec": null,
	I0717 22:17:05.281657  309853 command_runner.go:130] >       "pinned": false
	I0717 22:17:05.281661  309853 command_runner.go:130] >     },
	I0717 22:17:05.281670  309853 command_runner.go:130] >     {
	I0717 22:17:05.281678  309853 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0717 22:17:05.281682  309853 command_runner.go:130] >       "repoTags": [
	I0717 22:17:05.281688  309853 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0717 22:17:05.281697  309853 command_runner.go:130] >       ],
	I0717 22:17:05.281705  309853 command_runner.go:130] >       "repoDigests": [
	I0717 22:17:05.281718  309853 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0717 22:17:05.281731  309853 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0717 22:17:05.281739  309853 command_runner.go:130] >       ],
	I0717 22:17:05.281748  309853 command_runner.go:130] >       "size": "31470524",
	I0717 22:17:05.281754  309853 command_runner.go:130] >       "uid": null,
	I0717 22:17:05.281761  309853 command_runner.go:130] >       "username": "",
	I0717 22:17:05.281765  309853 command_runner.go:130] >       "spec": null,
	I0717 22:17:05.281771  309853 command_runner.go:130] >       "pinned": false
	I0717 22:17:05.281777  309853 command_runner.go:130] >     },
	I0717 22:17:05.281786  309853 command_runner.go:130] >     {
	I0717 22:17:05.281797  309853 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0717 22:17:05.281807  309853 command_runner.go:130] >       "repoTags": [
	I0717 22:17:05.281816  309853 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0717 22:17:05.281824  309853 command_runner.go:130] >       ],
	I0717 22:17:05.281831  309853 command_runner.go:130] >       "repoDigests": [
	I0717 22:17:05.281853  309853 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0717 22:17:05.281865  309853 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0717 22:17:05.281875  309853 command_runner.go:130] >       ],
	I0717 22:17:05.281886  309853 command_runner.go:130] >       "size": "53621675",
	I0717 22:17:05.281896  309853 command_runner.go:130] >       "uid": null,
	I0717 22:17:05.281903  309853 command_runner.go:130] >       "username": "",
	I0717 22:17:05.281913  309853 command_runner.go:130] >       "spec": null,
	I0717 22:17:05.281920  309853 command_runner.go:130] >       "pinned": false
	I0717 22:17:05.281928  309853 command_runner.go:130] >     },
	I0717 22:17:05.281934  309853 command_runner.go:130] >     {
	I0717 22:17:05.281943  309853 command_runner.go:130] >       "id": "86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681",
	I0717 22:17:05.281948  309853 command_runner.go:130] >       "repoTags": [
	I0717 22:17:05.281960  309853 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.7-0"
	I0717 22:17:05.281970  309853 command_runner.go:130] >       ],
	I0717 22:17:05.281978  309853 command_runner.go:130] >       "repoDigests": [
	I0717 22:17:05.281992  309853 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83",
	I0717 22:17:05.282007  309853 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9"
	I0717 22:17:05.282021  309853 command_runner.go:130] >       ],
	I0717 22:17:05.282025  309853 command_runner.go:130] >       "size": "297083935",
	I0717 22:17:05.282029  309853 command_runner.go:130] >       "uid": {
	I0717 22:17:05.282035  309853 command_runner.go:130] >         "value": "0"
	I0717 22:17:05.282045  309853 command_runner.go:130] >       },
	I0717 22:17:05.282052  309853 command_runner.go:130] >       "username": "",
	I0717 22:17:05.282063  309853 command_runner.go:130] >       "spec": null,
	I0717 22:17:05.282071  309853 command_runner.go:130] >       "pinned": false
	I0717 22:17:05.282078  309853 command_runner.go:130] >     },
	I0717 22:17:05.282086  309853 command_runner.go:130] >     {
	I0717 22:17:05.282100  309853 command_runner.go:130] >       "id": "08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a",
	I0717 22:17:05.282108  309853 command_runner.go:130] >       "repoTags": [
	I0717 22:17:05.282113  309853 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.27.3"
	I0717 22:17:05.282122  309853 command_runner.go:130] >       ],
	I0717 22:17:05.282129  309853 command_runner.go:130] >       "repoDigests": [
	I0717 22:17:05.282155  309853 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb",
	I0717 22:17:05.282172  309853 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:fd03335dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0"
	I0717 22:17:05.282181  309853 command_runner.go:130] >       ],
	I0717 22:17:05.282191  309853 command_runner.go:130] >       "size": "122065872",
	I0717 22:17:05.282197  309853 command_runner.go:130] >       "uid": {
	I0717 22:17:05.282202  309853 command_runner.go:130] >         "value": "0"
	I0717 22:17:05.282208  309853 command_runner.go:130] >       },
	I0717 22:17:05.282215  309853 command_runner.go:130] >       "username": "",
	I0717 22:17:05.282221  309853 command_runner.go:130] >       "spec": null,
	I0717 22:17:05.282228  309853 command_runner.go:130] >       "pinned": false
	I0717 22:17:05.282234  309853 command_runner.go:130] >     },
	I0717 22:17:05.282243  309853 command_runner.go:130] >     {
	I0717 22:17:05.282254  309853 command_runner.go:130] >       "id": "7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f",
	I0717 22:17:05.282264  309853 command_runner.go:130] >       "repoTags": [
	I0717 22:17:05.282275  309853 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.27.3"
	I0717 22:17:05.282282  309853 command_runner.go:130] >       ],
	I0717 22:17:05.282287  309853 command_runner.go:130] >       "repoDigests": [
	I0717 22:17:05.282300  309853 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e",
	I0717 22:17:05.282318  309853 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:d3bdc20876edfaa4894cf8464dc98592385a43cbc033b37846dccc2460c7bc06"
	I0717 22:17:05.282327  309853 command_runner.go:130] >       ],
	I0717 22:17:05.282334  309853 command_runner.go:130] >       "size": "113919286",
	I0717 22:17:05.282344  309853 command_runner.go:130] >       "uid": {
	I0717 22:17:05.282350  309853 command_runner.go:130] >         "value": "0"
	I0717 22:17:05.282359  309853 command_runner.go:130] >       },
	I0717 22:17:05.282365  309853 command_runner.go:130] >       "username": "",
	I0717 22:17:05.282372  309853 command_runner.go:130] >       "spec": null,
	I0717 22:17:05.282379  309853 command_runner.go:130] >       "pinned": false
	I0717 22:17:05.282389  309853 command_runner.go:130] >     },
	I0717 22:17:05.282399  309853 command_runner.go:130] >     {
	I0717 22:17:05.282409  309853 command_runner.go:130] >       "id": "5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c",
	I0717 22:17:05.282419  309853 command_runner.go:130] >       "repoTags": [
	I0717 22:17:05.282428  309853 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.27.3"
	I0717 22:17:05.282434  309853 command_runner.go:130] >       ],
	I0717 22:17:05.282444  309853 command_runner.go:130] >       "repoDigests": [
	I0717 22:17:05.282452  309853 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f",
	I0717 22:17:05.282465  309853 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699"
	I0717 22:17:05.282474  309853 command_runner.go:130] >       ],
	I0717 22:17:05.282482  309853 command_runner.go:130] >       "size": "72713623",
	I0717 22:17:05.282491  309853 command_runner.go:130] >       "uid": null,
	I0717 22:17:05.282499  309853 command_runner.go:130] >       "username": "",
	I0717 22:17:05.282505  309853 command_runner.go:130] >       "spec": null,
	I0717 22:17:05.282515  309853 command_runner.go:130] >       "pinned": false
	I0717 22:17:05.282521  309853 command_runner.go:130] >     },
	I0717 22:17:05.282530  309853 command_runner.go:130] >     {
	I0717 22:17:05.282537  309853 command_runner.go:130] >       "id": "41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a",
	I0717 22:17:05.282544  309853 command_runner.go:130] >       "repoTags": [
	I0717 22:17:05.282553  309853 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.27.3"
	I0717 22:17:05.282562  309853 command_runner.go:130] >       ],
	I0717 22:17:05.282569  309853 command_runner.go:130] >       "repoDigests": [
	I0717 22:17:05.282596  309853 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082",
	I0717 22:17:05.282611  309853 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db3218a4b16ec19af174957e518df40c8"
	I0717 22:17:05.282619  309853 command_runner.go:130] >       ],
	I0717 22:17:05.282624  309853 command_runner.go:130] >       "size": "59811126",
	I0717 22:17:05.282631  309853 command_runner.go:130] >       "uid": {
	I0717 22:17:05.282638  309853 command_runner.go:130] >         "value": "0"
	I0717 22:17:05.282648  309853 command_runner.go:130] >       },
	I0717 22:17:05.282655  309853 command_runner.go:130] >       "username": "",
	I0717 22:17:05.282665  309853 command_runner.go:130] >       "spec": null,
	I0717 22:17:05.282676  309853 command_runner.go:130] >       "pinned": false
	I0717 22:17:05.282685  309853 command_runner.go:130] >     },
	I0717 22:17:05.282691  309853 command_runner.go:130] >     {
	I0717 22:17:05.282704  309853 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0717 22:17:05.282709  309853 command_runner.go:130] >       "repoTags": [
	I0717 22:17:05.282716  309853 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0717 22:17:05.282722  309853 command_runner.go:130] >       ],
	I0717 22:17:05.282733  309853 command_runner.go:130] >       "repoDigests": [
	I0717 22:17:05.282749  309853 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0717 22:17:05.282764  309853 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0717 22:17:05.282773  309853 command_runner.go:130] >       ],
	I0717 22:17:05.282780  309853 command_runner.go:130] >       "size": "750414",
	I0717 22:17:05.282789  309853 command_runner.go:130] >       "uid": {
	I0717 22:17:05.282795  309853 command_runner.go:130] >         "value": "65535"
	I0717 22:17:05.282801  309853 command_runner.go:130] >       },
	I0717 22:17:05.282807  309853 command_runner.go:130] >       "username": "",
	I0717 22:17:05.282814  309853 command_runner.go:130] >       "spec": null,
	I0717 22:17:05.282825  309853 command_runner.go:130] >       "pinned": false
	I0717 22:17:05.282831  309853 command_runner.go:130] >     }
	I0717 22:17:05.282840  309853 command_runner.go:130] >   ]
	I0717 22:17:05.282850  309853 command_runner.go:130] > }
	I0717 22:17:05.282995  309853 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 22:17:05.283010  309853 cache_images.go:84] Images are preloaded, skipping loading
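crio.go:496 and cache_images.go:84 above conclude that image extraction can be skipped by decoding the `crictl images --output json` payload (the shape shown twice above) and checking for the expected tags. A sketch of that check against a few of the v1.27.3 images from the log (the real expected list is longer):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		log.Fatal(err)
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, t := range img.RepoTags {
			have[t] = true
		}
	}
	for _, want := range []string{
		"registry.k8s.io/kube-apiserver:v1.27.3",
		"registry.k8s.io/kube-scheduler:v1.27.3",
		"registry.k8s.io/etcd:3.5.7-0",
		"registry.k8s.io/pause:3.9",
	} {
		if !have[want] {
			fmt.Println("missing, extraction needed:", want)
			return
		}
	}
	fmt.Println("all images are preloaded for cri-o runtime")
}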
	I0717 22:17:05.283076  309853 ssh_runner.go:195] Run: crio config
	I0717 22:17:05.318195  309853 command_runner.go:130] ! time="2023-07-17 22:17:05.317684683Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0717 22:17:05.318228  309853 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0717 22:17:05.323011  309853 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0717 22:17:05.323038  309853 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0717 22:17:05.323048  309853 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0717 22:17:05.323065  309853 command_runner.go:130] > #
	I0717 22:17:05.323079  309853 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0717 22:17:05.323091  309853 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0717 22:17:05.323102  309853 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0717 22:17:05.323118  309853 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0717 22:17:05.323132  309853 command_runner.go:130] > # reload'.
	I0717 22:17:05.323143  309853 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0717 22:17:05.323153  309853 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0717 22:17:05.323165  309853 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0717 22:17:05.323179  309853 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0717 22:17:05.323188  309853 command_runner.go:130] > [crio]
	I0717 22:17:05.323201  309853 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0717 22:17:05.323213  309853 command_runner.go:130] > # containers images, in this directory.
	I0717 22:17:05.323231  309853 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0717 22:17:05.323246  309853 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0717 22:17:05.323258  309853 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0717 22:17:05.323270  309853 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0717 22:17:05.323285  309853 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0717 22:17:05.323296  309853 command_runner.go:130] > # storage_driver = "vfs"
	I0717 22:17:05.323309  309853 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0717 22:17:05.323323  309853 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0717 22:17:05.323334  309853 command_runner.go:130] > # storage_option = [
	I0717 22:17:05.323340  309853 command_runner.go:130] > # ]
	I0717 22:17:05.323352  309853 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0717 22:17:05.323366  309853 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0717 22:17:05.323378  309853 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0717 22:17:05.323391  309853 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0717 22:17:05.323405  309853 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0717 22:17:05.323432  309853 command_runner.go:130] > # always happen on a node reboot
	I0717 22:17:05.323441  309853 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0717 22:17:05.323455  309853 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0717 22:17:05.323469  309853 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0717 22:17:05.323485  309853 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0717 22:17:05.323498  309853 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0717 22:17:05.323514  309853 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0717 22:17:05.323531  309853 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0717 22:17:05.323542  309853 command_runner.go:130] > # internal_wipe = true
	I0717 22:17:05.323558  309853 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0717 22:17:05.323574  309853 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0717 22:17:05.323587  309853 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0717 22:17:05.323600  309853 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0717 22:17:05.323613  309853 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0717 22:17:05.323620  309853 command_runner.go:130] > [crio.api]
	I0717 22:17:05.323633  309853 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0717 22:17:05.323649  309853 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0717 22:17:05.323662  309853 command_runner.go:130] > # IP address on which the stream server will listen.
	I0717 22:17:05.323674  309853 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0717 22:17:05.323689  309853 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0717 22:17:05.323700  309853 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0717 22:17:05.323708  309853 command_runner.go:130] > # stream_port = "0"
	I0717 22:17:05.323721  309853 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0717 22:17:05.323732  309853 command_runner.go:130] > # stream_enable_tls = false
	I0717 22:17:05.323744  309853 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0717 22:17:05.323755  309853 command_runner.go:130] > # stream_idle_timeout = ""
	I0717 22:17:05.323770  309853 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0717 22:17:05.323784  309853 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0717 22:17:05.323793  309853 command_runner.go:130] > # minutes.
	I0717 22:17:05.323803  309853 command_runner.go:130] > # stream_tls_cert = ""
	I0717 22:17:05.323817  309853 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0717 22:17:05.323830  309853 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0717 22:17:05.323839  309853 command_runner.go:130] > # stream_tls_key = ""
	I0717 22:17:05.323853  309853 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0717 22:17:05.323868  309853 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0717 22:17:05.323880  309853 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0717 22:17:05.323891  309853 command_runner.go:130] > # stream_tls_ca = ""
	I0717 22:17:05.323907  309853 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0717 22:17:05.323918  309853 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0717 22:17:05.323931  309853 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0717 22:17:05.323943  309853 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0717 22:17:05.323973  309853 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0717 22:17:05.323987  309853 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0717 22:17:05.323994  309853 command_runner.go:130] > [crio.runtime]
	I0717 22:17:05.324008  309853 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0717 22:17:05.324018  309853 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0717 22:17:05.324030  309853 command_runner.go:130] > # "nofile=1024:2048"
	I0717 22:17:05.324048  309853 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0717 22:17:05.324058  309853 command_runner.go:130] > # default_ulimits = [
	I0717 22:17:05.324064  309853 command_runner.go:130] > # ]
	I0717 22:17:05.324078  309853 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0717 22:17:05.324088  309853 command_runner.go:130] > # no_pivot = false
	I0717 22:17:05.324099  309853 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0717 22:17:05.324113  309853 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0717 22:17:05.324125  309853 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0717 22:17:05.324138  309853 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0717 22:17:05.324150  309853 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0717 22:17:05.324166  309853 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0717 22:17:05.324175  309853 command_runner.go:130] > # conmon = ""
	I0717 22:17:05.324183  309853 command_runner.go:130] > # Cgroup setting for conmon
	I0717 22:17:05.324198  309853 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0717 22:17:05.324208  309853 command_runner.go:130] > conmon_cgroup = "pod"
	I0717 22:17:05.324223  309853 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0717 22:17:05.324236  309853 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0717 22:17:05.324252  309853 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0717 22:17:05.324261  309853 command_runner.go:130] > # conmon_env = [
	I0717 22:17:05.324267  309853 command_runner.go:130] > # ]
	I0717 22:17:05.324279  309853 command_runner.go:130] > # Additional environment variables to set for all the
	I0717 22:17:05.324292  309853 command_runner.go:130] > # containers. These are overridden if set in the
	I0717 22:17:05.324305  309853 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0717 22:17:05.324316  309853 command_runner.go:130] > # default_env = [
	I0717 22:17:05.324325  309853 command_runner.go:130] > # ]
	I0717 22:17:05.324336  309853 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0717 22:17:05.324345  309853 command_runner.go:130] > # selinux = false
	I0717 22:17:05.324357  309853 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0717 22:17:05.324371  309853 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0717 22:17:05.324384  309853 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0717 22:17:05.324395  309853 command_runner.go:130] > # seccomp_profile = ""
	I0717 22:17:05.324408  309853 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0717 22:17:05.324421  309853 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0717 22:17:05.324432  309853 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0717 22:17:05.324443  309853 command_runner.go:130] > # which might increase security.
	I0717 22:17:05.324455  309853 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0717 22:17:05.324470  309853 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0717 22:17:05.324486  309853 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0717 22:17:05.324501  309853 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0717 22:17:05.324518  309853 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0717 22:17:05.324531  309853 command_runner.go:130] > # This option supports live configuration reload.
	I0717 22:17:05.324542  309853 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0717 22:17:05.324555  309853 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0717 22:17:05.324569  309853 command_runner.go:130] > # the cgroup blockio controller.
	I0717 22:17:05.324581  309853 command_runner.go:130] > # blockio_config_file = ""
	I0717 22:17:05.324596  309853 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0717 22:17:05.324605  309853 command_runner.go:130] > # irqbalance daemon.
	I0717 22:17:05.324614  309853 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0717 22:17:05.324629  309853 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0717 22:17:05.324650  309853 command_runner.go:130] > # This option supports live configuration reload.
	I0717 22:17:05.324661  309853 command_runner.go:130] > # rdt_config_file = ""
	I0717 22:17:05.324672  309853 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0717 22:17:05.324682  309853 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0717 22:17:05.324696  309853 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0717 22:17:05.324706  309853 command_runner.go:130] > # separate_pull_cgroup = ""
	I0717 22:17:05.324721  309853 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0717 22:17:05.324735  309853 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0717 22:17:05.324745  309853 command_runner.go:130] > # will be added.
	I0717 22:17:05.324756  309853 command_runner.go:130] > # default_capabilities = [
	I0717 22:17:05.324765  309853 command_runner.go:130] > # 	"CHOWN",
	I0717 22:17:05.324773  309853 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0717 22:17:05.324781  309853 command_runner.go:130] > # 	"FSETID",
	I0717 22:17:05.324788  309853 command_runner.go:130] > # 	"FOWNER",
	I0717 22:17:05.324798  309853 command_runner.go:130] > # 	"SETGID",
	I0717 22:17:05.324806  309853 command_runner.go:130] > # 	"SETUID",
	I0717 22:17:05.324816  309853 command_runner.go:130] > # 	"SETPCAP",
	I0717 22:17:05.324827  309853 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0717 22:17:05.324836  309853 command_runner.go:130] > # 	"KILL",
	I0717 22:17:05.324843  309853 command_runner.go:130] > # ]
	I0717 22:17:05.324859  309853 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0717 22:17:05.324873  309853 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0717 22:17:05.324884  309853 command_runner.go:130] > # add_inheritable_capabilities = true
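	The two knobs above control what a container starts with: default_capabilities seeds the capability set, and add_inheritable_capabilities extends it to the inheritable set so capabilities also work for non-root users. A minimal sketch of tightening the defaults via a drop-in file; the path and the trimmed list are illustrative, not taken from this run:
	
	    # Hypothetical drop-in; CRI-O merges /etc/crio/crio.conf.d/*.conf over crio.conf.
	    sudo tee /etc/crio/crio.conf.d/10-caps.conf <<'EOF'
	    [crio.runtime]
	    default_capabilities = [
	            "CHOWN",
	            "NET_BIND_SERVICE",
	    ]
	    EOF
	    sudo systemctl restart crio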
	I0717 22:17:05.324899  309853 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0717 22:17:05.324912  309853 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0717 22:17:05.324923  309853 command_runner.go:130] > # default_sysctls = [
	I0717 22:17:05.324929  309853 command_runner.go:130] > # ]
	I0717 22:17:05.324941  309853 command_runner.go:130] > # List of devices on the host that a
	I0717 22:17:05.324952  309853 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0717 22:17:05.324962  309853 command_runner.go:130] > # allowed_devices = [
	I0717 22:17:05.324969  309853 command_runner.go:130] > # 	"/dev/fuse",
	I0717 22:17:05.324978  309853 command_runner.go:130] > # ]
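	For context, a pod can request one of these allowed devices through the annotation named above, provided its runtime handler lists "io.kubernetes.cri-o.Devices" in allowed_annotations. A hedged example; the pod name and image are placeholders:
	
	    # Hypothetical pod asking CRI-O to expose /dev/fuse inside the container.
	    kubectl run fuse-test --image=busybox --restart=Never \
	      --annotations='io.kubernetes.cri-o.Devices=/dev/fuse' \
	      -- sleep 3600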
	I0717 22:17:05.324991  309853 command_runner.go:130] > # List of additional devices, specified as
	I0717 22:17:05.325025  309853 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0717 22:17:05.325039  309853 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0717 22:17:05.325049  309853 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0717 22:17:05.325060  309853 command_runner.go:130] > # additional_devices = [
	I0717 22:17:05.325069  309853 command_runner.go:130] > # ]
	I0717 22:17:05.325079  309853 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0717 22:17:05.325089  309853 command_runner.go:130] > # cdi_spec_dirs = [
	I0717 22:17:05.325099  309853 command_runner.go:130] > # 	"/etc/cdi",
	I0717 22:17:05.325106  309853 command_runner.go:130] > # 	"/var/run/cdi",
	I0717 22:17:05.325115  309853 command_runner.go:130] > # ]
	I0717 22:17:05.325127  309853 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0717 22:17:05.325141  309853 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0717 22:17:05.325151  309853 command_runner.go:130] > # Defaults to false.
	I0717 22:17:05.325163  309853 command_runner.go:130] > # device_ownership_from_security_context = false
	I0717 22:17:05.325178  309853 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0717 22:17:05.325192  309853 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0717 22:17:05.325201  309853 command_runner.go:130] > # hooks_dir = [
	I0717 22:17:05.325209  309853 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0717 22:17:05.325218  309853 command_runner.go:130] > # ]
	I0717 22:17:05.325230  309853 command_runner.go:130] > # Path to the file specifying the default mounts for each container. The
	I0717 22:17:05.325245  309853 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0717 22:17:05.325258  309853 command_runner.go:130] > # its default mounts from the following two files:
	I0717 22:17:05.325266  309853 command_runner.go:130] > #
	I0717 22:17:05.325278  309853 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0717 22:17:05.325292  309853 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0717 22:17:05.325307  309853 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0717 22:17:05.325316  309853 command_runner.go:130] > #
	I0717 22:17:05.325328  309853 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0717 22:17:05.325342  309853 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0717 22:17:05.325357  309853 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0717 22:17:05.325369  309853 command_runner.go:130] > #      only add mounts it finds in this file.
	I0717 22:17:05.325377  309853 command_runner.go:130] > #
	I0717 22:17:05.325385  309853 command_runner.go:130] > # default_mounts_file = ""
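	As a sketch of the /SRC:/DST format described above (one mount per line; the paths are illustrative, not from this run):
	
	    # Hypothetical override file; CRI-O would bind-mount /opt/certs into every container.
	    sudo tee /etc/containers/mounts.conf <<'EOF'
	    /opt/certs:/etc/pki/custom
	    EOF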
	I0717 22:17:05.325398  309853 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0717 22:17:05.325413  309853 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0717 22:17:05.325423  309853 command_runner.go:130] > # pids_limit = 0
	I0717 22:17:05.325437  309853 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0717 22:17:05.325451  309853 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0717 22:17:05.325465  309853 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0717 22:17:05.325486  309853 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0717 22:17:05.325496  309853 command_runner.go:130] > # log_size_max = -1
	I0717 22:17:05.325512  309853 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0717 22:17:05.325523  309853 command_runner.go:130] > # log_to_journald = false
	I0717 22:17:05.325534  309853 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0717 22:17:05.325545  309853 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0717 22:17:05.325555  309853 command_runner.go:130] > # Path to directory for container attach sockets.
	I0717 22:17:05.325567  309853 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0717 22:17:05.325580  309853 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0717 22:17:05.325590  309853 command_runner.go:130] > # bind_mount_prefix = ""
	I0717 22:17:05.325604  309853 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0717 22:17:05.325614  309853 command_runner.go:130] > # read_only = false
	I0717 22:17:05.325626  309853 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0717 22:17:05.325644  309853 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0717 22:17:05.325656  309853 command_runner.go:130] > # live configuration reload.
	I0717 22:17:05.325664  309853 command_runner.go:130] > # log_level = "info"
	I0717 22:17:05.325677  309853 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0717 22:17:05.325689  309853 command_runner.go:130] > # This option supports live configuration reload.
	I0717 22:17:05.325699  309853 command_runner.go:130] > # log_filter = ""
	I0717 22:17:05.325713  309853 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0717 22:17:05.325727  309853 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0717 22:17:05.325737  309853 command_runner.go:130] > # separated by comma.
	I0717 22:17:05.325748  309853 command_runner.go:130] > # uid_mappings = ""
	I0717 22:17:05.325762  309853 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0717 22:17:05.325776  309853 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0717 22:17:05.325786  309853 command_runner.go:130] > # separated by comma.
	I0717 22:17:05.325795  309853 command_runner.go:130] > # gid_mappings = ""
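	A hedged illustration of the containerUID:HostUID:Size form, written as a drop-in (the ID range is a common rootless convention, not a value from this run):
	
	    sudo tee /etc/crio/crio.conf.d/20-userns.conf <<'EOF'
	    [crio.runtime]
	    # Container UID/GID 0 maps to host ID 100000, for a range of 65536 IDs.
	    uid_mappings = "0:100000:65536"
	    gid_mappings = "0:100000:65536"
	    EOF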
	I0717 22:17:05.325808  309853 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0717 22:17:05.325823  309853 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0717 22:17:05.325837  309853 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0717 22:17:05.325848  309853 command_runner.go:130] > # minimum_mappable_uid = -1
	I0717 22:17:05.325862  309853 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0717 22:17:05.325876  309853 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0717 22:17:05.325890  309853 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0717 22:17:05.325899  309853 command_runner.go:130] > # minimum_mappable_gid = -1
	I0717 22:17:05.325914  309853 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0717 22:17:05.325928  309853 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0717 22:17:05.325944  309853 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0717 22:17:05.325956  309853 command_runner.go:130] > # ctr_stop_timeout = 30
	I0717 22:17:05.325969  309853 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0717 22:17:05.325989  309853 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0717 22:17:05.326001  309853 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0717 22:17:05.326014  309853 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0717 22:17:05.326024  309853 command_runner.go:130] > # drop_infra_ctr = true
	I0717 22:17:05.326039  309853 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0717 22:17:05.326053  309853 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0717 22:17:05.326068  309853 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0717 22:17:05.326079  309853 command_runner.go:130] > # infra_ctr_cpuset = ""
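	The Linux CPU list format mentioned above is the cpuset syntax, e.g. "0-1" or "0,2-3". A sketch of pinning infra containers, assuming the kubelet reserves the same CPUs:
	
	    # Hypothetical: keep pause containers on the reserved CPUs 0 and 1.
	    sudo tee /etc/crio/crio.conf.d/30-infra-cpuset.conf <<'EOF'
	    [crio.runtime]
	    infra_ctr_cpuset = "0-1"
	    EOF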
	I0717 22:17:05.326091  309853 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0717 22:17:05.326102  309853 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0717 22:17:05.326111  309853 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0717 22:17:05.326127  309853 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0717 22:17:05.326137  309853 command_runner.go:130] > # pinns_path = ""
	I0717 22:17:05.326148  309853 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0717 22:17:05.326162  309853 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0717 22:17:05.326176  309853 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0717 22:17:05.326187  309853 command_runner.go:130] > # default_runtime = "runc"
	I0717 22:17:05.326201  309853 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0717 22:17:05.326218  309853 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior, where the path is created as a directory).
	I0717 22:17:05.326236  309853 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0717 22:17:05.326249  309853 command_runner.go:130] > # creation as a file is not desired either.
	I0717 22:17:05.326267  309853 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0717 22:17:05.326279  309853 command_runner.go:130] > # the hostname is being managed dynamically.
	I0717 22:17:05.326290  309853 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0717 22:17:05.326299  309853 command_runner.go:130] > # ]
	I0717 22:17:05.326312  309853 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0717 22:17:05.326326  309853 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0717 22:17:05.326341  309853 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0717 22:17:05.326355  309853 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0717 22:17:05.326364  309853 command_runner.go:130] > #
	I0717 22:17:05.326373  309853 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0717 22:17:05.326385  309853 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0717 22:17:05.326395  309853 command_runner.go:130] > #  runtime_type = "oci"
	I0717 22:17:05.326418  309853 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0717 22:17:05.326431  309853 command_runner.go:130] > #  privileged_without_host_devices = false
	I0717 22:17:05.326442  309853 command_runner.go:130] > #  allowed_annotations = []
	I0717 22:17:05.326452  309853 command_runner.go:130] > # Where:
	I0717 22:17:05.326463  309853 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0717 22:17:05.326478  309853 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0717 22:17:05.326492  309853 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0717 22:17:05.326506  309853 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0717 22:17:05.326517  309853 command_runner.go:130] > #   in $PATH.
	I0717 22:17:05.326531  309853 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0717 22:17:05.326543  309853 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0717 22:17:05.326557  309853 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0717 22:17:05.326566  309853 command_runner.go:130] > #   state.
	I0717 22:17:05.326578  309853 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0717 22:17:05.326591  309853 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0717 22:17:05.326606  309853 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0717 22:17:05.326619  309853 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0717 22:17:05.326635  309853 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0717 22:17:05.326654  309853 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0717 22:17:05.326666  309853 command_runner.go:130] > #   The currently recognized values are:
	I0717 22:17:05.326683  309853 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0717 22:17:05.326701  309853 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0717 22:17:05.326715  309853 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0717 22:17:05.326729  309853 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0717 22:17:05.326745  309853 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0717 22:17:05.326757  309853 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0717 22:17:05.326771  309853 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0717 22:17:05.326786  309853 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0717 22:17:05.326798  309853 command_runner.go:130] > #   should be moved to the container's cgroup
	I0717 22:17:05.326809  309853 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0717 22:17:05.326821  309853 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0717 22:17:05.326830  309853 command_runner.go:130] > runtime_type = "oci"
	I0717 22:17:05.326838  309853 command_runner.go:130] > runtime_root = "/run/runc"
	I0717 22:17:05.326849  309853 command_runner.go:130] > runtime_config_path = ""
	I0717 22:17:05.326857  309853 command_runner.go:130] > monitor_path = ""
	I0717 22:17:05.326867  309853 command_runner.go:130] > monitor_cgroup = ""
	I0717 22:17:05.326878  309853 command_runner.go:130] > monitor_exec_cgroup = ""
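	To make a second handler like the commented-out crun entry usable from Kubernetes, it must both exist in this table and be exposed as a RuntimeClass. A hedged sketch; the crun path is a common distro default and should be verified locally:
	
	    # Hypothetical extra handler for CRI-O.
	    sudo tee /etc/crio/crio.conf.d/40-crun.conf <<'EOF'
	    [crio.runtime.runtimes.crun]
	    runtime_path = "/usr/bin/crun"
	    runtime_type = "oci"
	    runtime_root = "/run/crun"
	    EOF
	    sudo systemctl restart crio
	    # Pods then select it with spec.runtimeClassName: crun.
	    kubectl apply -f - <<'EOF'
	    apiVersion: node.k8s.io/v1
	    kind: RuntimeClass
	    metadata:
	      name: crun
	    handler: crun
	    EOF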
	I0717 22:17:05.326919  309853 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0717 22:17:05.326930  309853 command_runner.go:130] > # running containers
	I0717 22:17:05.326939  309853 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0717 22:17:05.326953  309853 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0717 22:17:05.326968  309853 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0717 22:17:05.326982  309853 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I0717 22:17:05.326994  309853 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0717 22:17:05.327006  309853 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0717 22:17:05.327016  309853 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0717 22:17:05.327024  309853 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0717 22:17:05.327036  309853 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0717 22:17:05.327047  309853 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0717 22:17:05.327062  309853 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0717 22:17:05.327075  309853 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0717 22:17:05.327089  309853 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0717 22:17:05.327106  309853 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0717 22:17:05.327122  309853 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0717 22:17:05.327136  309853 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0717 22:17:05.327155  309853 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0717 22:17:05.327173  309853 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0717 22:17:05.327187  309853 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0717 22:17:05.327202  309853 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0717 22:17:05.327211  309853 command_runner.go:130] > # Example:
	I0717 22:17:05.327221  309853 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0717 22:17:05.327233  309853 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0717 22:17:05.327246  309853 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0717 22:17:05.327259  309853 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0717 22:17:05.327268  309853 command_runner.go:130] > # cpuset = 0
	I0717 22:17:05.327275  309853 command_runner.go:130] > # cpushares = "0-1"
	I0717 22:17:05.327284  309853 command_runner.go:130] > # Where:
	I0717 22:17:05.327295  309853 command_runner.go:130] > # The workload name is workload-type.
	I0717 22:17:05.327310  309853 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0717 22:17:05.327323  309853 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0717 22:17:05.327337  309853 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0717 22:17:05.327355  309853 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0717 22:17:05.327372  309853 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0717 22:17:05.327381  309853 command_runner.go:130] > # 
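	Putting the workload example together, a pod opting in and overriding cpushares for a container named "app" might be annotated as follows (a sketch following the annotation form shown above; names and value are illustrative):
	
	    # Hypothetical: the first annotation activates the workload (key only);
	    # the second overrides cpushares for the container "app".
	    kubectl annotate pod mypod \
	      io.crio/workload= \
	      'io.crio.workload-type/app={"cpushares": "512"}'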
	I0717 22:17:05.327394  309853 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0717 22:17:05.327402  309853 command_runner.go:130] > #
	I0717 22:17:05.327429  309853 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0717 22:17:05.327444  309853 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0717 22:17:05.327459  309853 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0717 22:17:05.327473  309853 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0717 22:17:05.327487  309853 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0717 22:17:05.327497  309853 command_runner.go:130] > [crio.image]
	I0717 22:17:05.327510  309853 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0717 22:17:05.327521  309853 command_runner.go:130] > # default_transport = "docker://"
	I0717 22:17:05.327536  309853 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0717 22:17:05.327551  309853 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0717 22:17:05.327562  309853 command_runner.go:130] > # global_auth_file = ""
	I0717 22:17:05.327574  309853 command_runner.go:130] > # The image used to instantiate infra containers.
	I0717 22:17:05.327587  309853 command_runner.go:130] > # This option supports live configuration reload.
	I0717 22:17:05.327599  309853 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0717 22:17:05.327611  309853 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0717 22:17:05.327625  309853 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0717 22:17:05.327641  309853 command_runner.go:130] > # This option supports live configuration reload.
	I0717 22:17:05.327656  309853 command_runner.go:130] > # pause_image_auth_file = ""
	I0717 22:17:05.327671  309853 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0717 22:17:05.327684  309853 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0717 22:17:05.327697  309853 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0717 22:17:05.327711  309853 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0717 22:17:05.327722  309853 command_runner.go:130] > # pause_command = "/pause"
	I0717 22:17:05.327735  309853 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0717 22:17:05.327750  309853 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0717 22:17:05.327764  309853 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0717 22:17:05.327779  309853 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0717 22:17:05.327791  309853 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0717 22:17:05.327802  309853 command_runner.go:130] > # signature_policy = ""
	I0717 22:17:05.327821  309853 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0717 22:17:05.327835  309853 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0717 22:17:05.327845  309853 command_runner.go:130] > # changing them here.
	I0717 22:17:05.327857  309853 command_runner.go:130] > # insecure_registries = [
	I0717 22:17:05.327866  309853 command_runner.go:130] > # ]
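	As the comment recommends, registry trust is better expressed once in containers-registries.conf; a sketch of marking one local registry insecure there (the registry address is a placeholder):
	
	    # Hypothetical entry appended to /etc/containers/registries.conf (TOML v2 format).
	    sudo tee -a /etc/containers/registries.conf <<'EOF'
	
	    [[registry]]
	    location = "registry.local:5000"
	    insecure = true
	    EOF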
	I0717 22:17:05.327878  309853 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0717 22:17:05.327891  309853 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I0717 22:17:05.327901  309853 command_runner.go:130] > # image_volumes = "mkdir"
	I0717 22:17:05.327914  309853 command_runner.go:130] > # Temporary directory to use for storing big files
	I0717 22:17:05.327926  309853 command_runner.go:130] > # big_files_temporary_dir = ""
	I0717 22:17:05.327940  309853 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0717 22:17:05.327949  309853 command_runner.go:130] > # CNI plugins.
	I0717 22:17:05.327956  309853 command_runner.go:130] > [crio.network]
	I0717 22:17:05.327970  309853 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0717 22:17:05.327983  309853 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0717 22:17:05.327993  309853 command_runner.go:130] > # cni_default_network = ""
	I0717 22:17:05.328007  309853 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0717 22:17:05.328018  309853 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0717 22:17:05.328032  309853 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0717 22:17:05.328041  309853 command_runner.go:130] > # plugin_dirs = [
	I0717 22:17:05.328049  309853 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0717 22:17:05.328058  309853 command_runner.go:130] > # ]
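	When cni_default_network is unset, "first one found" means the lexically first .conf/.conflist file in network_dir. A minimal bridge conflist as a sketch (name and subnet are illustrative; this run actually uses kindnet, as recommended later in the log):
	
	    sudo tee /etc/cni/net.d/10-bridge.conflist <<'EOF'
	    {
	      "cniVersion": "0.4.0",
	      "name": "bridge-net",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "cni0",
	          "isGateway": true,
	          "ipMasq": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        }
	      ]
	    }
	    EOF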
	I0717 22:17:05.328068  309853 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0717 22:17:05.328078  309853 command_runner.go:130] > [crio.metrics]
	I0717 22:17:05.328090  309853 command_runner.go:130] > # Globally enable or disable metrics support.
	I0717 22:17:05.328101  309853 command_runner.go:130] > # enable_metrics = false
	I0717 22:17:05.328113  309853 command_runner.go:130] > # Specify enabled metrics collectors.
	I0717 22:17:05.328125  309853 command_runner.go:130] > # By default, all metrics are enabled.
	I0717 22:17:05.328139  309853 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0717 22:17:05.328154  309853 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0717 22:17:05.328168  309853 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0717 22:17:05.328178  309853 command_runner.go:130] > # metrics_collectors = [
	I0717 22:17:05.328189  309853 command_runner.go:130] > # 	"operations",
	I0717 22:17:05.328200  309853 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0717 22:17:05.328212  309853 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0717 22:17:05.328223  309853 command_runner.go:130] > # 	"operations_errors",
	I0717 22:17:05.328234  309853 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0717 22:17:05.328245  309853 command_runner.go:130] > # 	"image_pulls_by_name",
	I0717 22:17:05.328256  309853 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0717 22:17:05.328267  309853 command_runner.go:130] > # 	"image_pulls_failures",
	I0717 22:17:05.328277  309853 command_runner.go:130] > # 	"image_pulls_successes",
	I0717 22:17:05.328285  309853 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0717 22:17:05.328296  309853 command_runner.go:130] > # 	"image_layer_reuse",
	I0717 22:17:05.328304  309853 command_runner.go:130] > # 	"containers_oom_total",
	I0717 22:17:05.328315  309853 command_runner.go:130] > # 	"containers_oom",
	I0717 22:17:05.328325  309853 command_runner.go:130] > # 	"processes_defunct",
	I0717 22:17:05.328336  309853 command_runner.go:130] > # 	"operations_total",
	I0717 22:17:05.328347  309853 command_runner.go:130] > # 	"operations_latency_seconds",
	I0717 22:17:05.328362  309853 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0717 22:17:05.328374  309853 command_runner.go:130] > # 	"operations_errors_total",
	I0717 22:17:05.328385  309853 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0717 22:17:05.328394  309853 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0717 22:17:05.328405  309853 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0717 22:17:05.328416  309853 command_runner.go:130] > # 	"image_pulls_success_total",
	I0717 22:17:05.328427  309853 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0717 22:17:05.328437  309853 command_runner.go:130] > # 	"containers_oom_count_total",
	I0717 22:17:05.328443  309853 command_runner.go:130] > # ]
	I0717 22:17:05.328456  309853 command_runner.go:130] > # The port on which the metrics server will listen.
	I0717 22:17:05.328467  309853 command_runner.go:130] > # metrics_port = 9090
	I0717 22:17:05.328480  309853 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0717 22:17:05.328491  309853 command_runner.go:130] > # metrics_socket = ""
	I0717 22:17:05.328503  309853 command_runner.go:130] > # The certificate for the secure metrics server.
	I0717 22:17:05.328517  309853 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0717 22:17:05.328532  309853 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0717 22:17:05.328544  309853 command_runner.go:130] > # certificate on any modification event.
	I0717 22:17:05.328555  309853 command_runner.go:130] > # metrics_cert = ""
	I0717 22:17:05.328566  309853 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0717 22:17:05.328578  309853 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0717 22:17:05.328588  309853 command_runner.go:130] > # metrics_key = ""
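	With enable_metrics switched on, the collectors listed above are exported in Prometheus format on metrics_port. A quick smoke test, assuming metrics were enabled on the default port:
	
	    # Hypothetical check from inside the node.
	    curl -s http://127.0.0.1:9090/metrics | grep -E '^(crio|container_runtime)_' | head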
	I0717 22:17:05.328602  309853 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0717 22:17:05.328611  309853 command_runner.go:130] > [crio.tracing]
	I0717 22:17:05.328622  309853 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0717 22:17:05.328632  309853 command_runner.go:130] > # enable_tracing = false
	I0717 22:17:05.328649  309853 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0717 22:17:05.328661  309853 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0717 22:17:05.328673  309853 command_runner.go:130] > # Number of samples to collect per million spans.
	I0717 22:17:05.328685  309853 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0717 22:17:05.328699  309853 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0717 22:17:05.328707  309853 command_runner.go:130] > [crio.stats]
	I0717 22:17:05.328715  309853 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0717 22:17:05.328723  309853 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0717 22:17:05.328730  309853 command_runner.go:130] > # stats_collection_period = 0
	I0717 22:17:05.328830  309853 cni.go:84] Creating CNI manager for ""
	I0717 22:17:05.328841  309853 cni.go:137] 1 nodes found, recommending kindnet
	I0717 22:17:05.328852  309853 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 22:17:05.328879  309853 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-265316 NodeName:multinode-265316 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 22:17:05.329049  309853 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-265316"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
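	Before handing this file to a real init, it can be exercised without mutating the node; a hedged sketch using kubeadm's dry-run mode against the path minikube writes below:
	
	    # Hypothetical validation pass; --dry-run prints what kubeadm would create.
	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run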
	
	I0717 22:17:05.329138  309853 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-265316 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:multinode-265316 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 22:17:05.329209  309853 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 22:17:05.337959  309853 command_runner.go:130] > kubeadm
	I0717 22:17:05.337983  309853 command_runner.go:130] > kubectl
	I0717 22:17:05.337988  309853 command_runner.go:130] > kubelet
	I0717 22:17:05.338011  309853 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 22:17:05.338082  309853 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 22:17:05.346052  309853 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I0717 22:17:05.362307  309853 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 22:17:05.380757  309853 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I0717 22:17:05.397138  309853 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0717 22:17:05.400408  309853 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 22:17:05.410393  309853 certs.go:56] Setting up /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/multinode-265316 for IP: 192.168.58.2
	I0717 22:17:05.410423  309853 certs.go:190] acquiring lock for shared ca certs: {Name:mk5feafb57b96958f78245f8503644226fe57af0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:17:05.410576  309853 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16899-218877/.minikube/ca.key
	I0717 22:17:05.410617  309853 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16899-218877/.minikube/proxy-client-ca.key
	I0717 22:17:05.410662  309853 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/multinode-265316/client.key
	I0717 22:17:05.410684  309853 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/multinode-265316/client.crt with IP's: []
	I0717 22:17:05.586434  309853 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/multinode-265316/client.crt ...
	I0717 22:17:05.586470  309853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/multinode-265316/client.crt: {Name:mkc636b6d0c4d925533f60582df70553fae9f0d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:17:05.586646  309853 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/multinode-265316/client.key ...
	I0717 22:17:05.586657  309853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/multinode-265316/client.key: {Name:mkf4741d4757c63b64ad26c5782e5aacce70ed4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:17:05.586726  309853 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/multinode-265316/apiserver.key.cee25041
	I0717 22:17:05.586739  309853 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/multinode-265316/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0717 22:17:05.936372  309853 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/multinode-265316/apiserver.crt.cee25041 ...
	I0717 22:17:05.936406  309853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/multinode-265316/apiserver.crt.cee25041: {Name:mkf8b62037143330909f76ce87deea33a7141f1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:17:05.936575  309853 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/multinode-265316/apiserver.key.cee25041 ...
	I0717 22:17:05.936587  309853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/multinode-265316/apiserver.key.cee25041: {Name:mk27d9ac336ed921e4a817c37d724cd68daca888 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:17:05.936653  309853 certs.go:337] copying /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/multinode-265316/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/multinode-265316/apiserver.crt
	I0717 22:17:05.936721  309853 certs.go:341] copying /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/multinode-265316/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/multinode-265316/apiserver.key
	I0717 22:17:05.936769  309853 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/multinode-265316/proxy-client.key
	I0717 22:17:05.936782  309853 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/multinode-265316/proxy-client.crt with IP's: []
	I0717 22:17:05.987371  309853 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/multinode-265316/proxy-client.crt ...
	I0717 22:17:05.987398  309853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/multinode-265316/proxy-client.crt: {Name:mkdc0de0d50be535d2a7f50236921cd02875683a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:17:05.987555  309853 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/multinode-265316/proxy-client.key ...
	I0717 22:17:05.987567  309853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/multinode-265316/proxy-client.key: {Name:mk7a5cf040d94ab9ab46b8687dc6d5f2de78ce75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:17:05.987632  309853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/multinode-265316/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 22:17:05.987649  309853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/multinode-265316/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 22:17:05.987659  309853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/multinode-265316/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 22:17:05.987671  309853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/multinode-265316/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 22:17:05.987681  309853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 22:17:05.987693  309853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 22:17:05.987703  309853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 22:17:05.987715  309853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 22:17:05.987766  309853 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/home/jenkins/minikube-integration/16899-218877/.minikube/certs/225642.pem (1338 bytes)
	W0717 22:17:05.987802  309853 certs.go:433] ignoring /home/jenkins/minikube-integration/16899-218877/.minikube/certs/home/jenkins/minikube-integration/16899-218877/.minikube/certs/225642_empty.pem, impossibly tiny 0 bytes
	I0717 22:17:05.987815  309853 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 22:17:05.987837  309853 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca.pem (1078 bytes)
	I0717 22:17:05.987865  309853 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/home/jenkins/minikube-integration/16899-218877/.minikube/certs/cert.pem (1123 bytes)
	I0717 22:17:05.987897  309853 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/home/jenkins/minikube-integration/16899-218877/.minikube/certs/key.pem (1679 bytes)
	I0717 22:17:05.987935  309853 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-218877/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16899-218877/.minikube/files/etc/ssl/certs/2256422.pem (1708 bytes)
	I0717 22:17:05.987962  309853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/files/etc/ssl/certs/2256422.pem -> /usr/share/ca-certificates/2256422.pem
	I0717 22:17:05.987977  309853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:17:05.987988  309853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/225642.pem -> /usr/share/ca-certificates/225642.pem
	I0717 22:17:05.988473  309853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/multinode-265316/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 22:17:06.010170  309853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/multinode-265316/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 22:17:06.031032  309853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/multinode-265316/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 22:17:06.051842  309853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/multinode-265316/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 22:17:06.072449  309853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 22:17:06.092739  309853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 22:17:06.113882  309853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 22:17:06.135143  309853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 22:17:06.156262  309853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/files/etc/ssl/certs/2256422.pem --> /usr/share/ca-certificates/2256422.pem (1708 bytes)
	I0717 22:17:06.177412  309853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 22:17:06.201632  309853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/certs/225642.pem --> /usr/share/ca-certificates/225642.pem (1338 bytes)
	I0717 22:17:06.222694  309853 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 22:17:06.238319  309853 ssh_runner.go:195] Run: openssl version
	I0717 22:17:06.243264  309853 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0717 22:17:06.243337  309853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2256422.pem && ln -fs /usr/share/ca-certificates/2256422.pem /etc/ssl/certs/2256422.pem"
	I0717 22:17:06.252114  309853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2256422.pem
	I0717 22:17:06.255299  309853 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 17 22:04 /usr/share/ca-certificates/2256422.pem
	I0717 22:17:06.255337  309853 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 22:04 /usr/share/ca-certificates/2256422.pem
	I0717 22:17:06.255380  309853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2256422.pem
	I0717 22:17:06.261610  309853 command_runner.go:130] > 3ec20f2e
	I0717 22:17:06.261759  309853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2256422.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 22:17:06.270406  309853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 22:17:06.279376  309853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:17:06.283202  309853 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 17 21:58 /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:17:06.283255  309853 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 21:58 /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:17:06.283301  309853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:17:06.289388  309853 command_runner.go:130] > b5213941
	I0717 22:17:06.289551  309853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 22:17:06.297970  309853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/225642.pem && ln -fs /usr/share/ca-certificates/225642.pem /etc/ssl/certs/225642.pem"
	I0717 22:17:06.306485  309853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/225642.pem
	I0717 22:17:06.309746  309853 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 17 22:04 /usr/share/ca-certificates/225642.pem
	I0717 22:17:06.309793  309853 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 22:04 /usr/share/ca-certificates/225642.pem
	I0717 22:17:06.309836  309853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/225642.pem
	I0717 22:17:06.315925  309853 command_runner.go:130] > 51391683
	I0717 22:17:06.316103  309853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/225642.pem /etc/ssl/certs/51391683.0"
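	The three blocks above all apply the same OpenSSL subject-hash convention: each CA file gets a symlink named <hash>.0 so the verifier can locate it in /etc/ssl/certs. Condensed into a hedged one-liner (the cert path is a placeholder):
	
	    # Hypothetical: link one CA into the hash-indexed trust directory.
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/my-ca.pem)
	    sudo ln -fs /usr/share/ca-certificates/my-ca.pem "/etc/ssl/certs/${h}.0"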
	I0717 22:17:06.324523  309853 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 22:17:06.327563  309853 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 22:17:06.327602  309853 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 22:17:06.327637  309853 kubeadm.go:404] StartCluster: {Name:multinode-265316 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-265316 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:17:06.327714  309853 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 22:17:06.327748  309853 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 22:17:06.360363  309853 cri.go:89] found id: ""
	I0717 22:17:06.360434  309853 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 22:17:06.367919  309853 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0717 22:17:06.367945  309853 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0717 22:17:06.367956  309853 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0717 22:17:06.368662  309853 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 22:17:06.376714  309853 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0717 22:17:06.376795  309853 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 22:17:06.384960  309853 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0717 22:17:06.384993  309853 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0717 22:17:06.385001  309853 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0717 22:17:06.385009  309853 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 22:17:06.385039  309853 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 22:17:06.385080  309853 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
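
Note the long --ignore-preflight-errors list in this invocation: the docker driver cannot pass kubeadm's SystemVerification check (see the kernel-config WARNING at 22:17:15 below), so minikube suppresses those preflight checks up front. An illustrative sketch assembling such a flag from a slice, with the values trimmed from the invocation above; this is not minikube's actual code:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // A subset of the checks ignored in the log; the full list also
        // covers the static-pod manifest files and the etcd directories.
        ignored := []string{
            "DirAvailable--etc-kubernetes-manifests",
            "Port-10250",
            "Swap",
            "NumCPU",
            "Mem",
            "SystemVerification",
        }
        fmt.Println("--ignore-preflight-errors=" + strings.Join(ignored, ","))
    }
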
	I0717 22:17:06.428634  309853 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0717 22:17:06.428683  309853 command_runner.go:130] > [init] Using Kubernetes version: v1.27.3
	I0717 22:17:06.428729  309853 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 22:17:06.428742  309853 command_runner.go:130] > [preflight] Running pre-flight checks
	I0717 22:17:06.465737  309853 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0717 22:17:06.465772  309853 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0717 22:17:06.465882  309853 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1037-gcp
	I0717 22:17:06.465898  309853 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1037-gcp
	I0717 22:17:06.465955  309853 kubeadm.go:322] OS: Linux
	I0717 22:17:06.465966  309853 command_runner.go:130] > OS: Linux
	I0717 22:17:06.466031  309853 kubeadm.go:322] CGROUPS_CPU: enabled
	I0717 22:17:06.466043  309853 command_runner.go:130] > CGROUPS_CPU: enabled
	I0717 22:17:06.466113  309853 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0717 22:17:06.466124  309853 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0717 22:17:06.466200  309853 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0717 22:17:06.466213  309853 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0717 22:17:06.466275  309853 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0717 22:17:06.466285  309853 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0717 22:17:06.466356  309853 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0717 22:17:06.466368  309853 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0717 22:17:06.466428  309853 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0717 22:17:06.466449  309853 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0717 22:17:06.466524  309853 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0717 22:17:06.466533  309853 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0717 22:17:06.466589  309853 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0717 22:17:06.466596  309853 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0717 22:17:06.466647  309853 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0717 22:17:06.466663  309853 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0717 22:17:06.528204  309853 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 22:17:06.528246  309853 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 22:17:06.528345  309853 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 22:17:06.528357  309853 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 22:17:06.528511  309853 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 22:17:06.528530  309853 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 22:17:06.726436  309853 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 22:17:06.726466  309853 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 22:17:06.729860  309853 out.go:204]   - Generating certificates and keys ...
	I0717 22:17:06.729952  309853 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0717 22:17:06.730006  309853 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 22:17:06.730088  309853 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0717 22:17:06.730120  309853 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 22:17:06.806137  309853 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 22:17:06.806178  309853 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 22:17:06.990313  309853 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0717 22:17:06.990352  309853 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0717 22:17:07.337635  309853 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0717 22:17:07.337671  309853 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0717 22:17:07.465248  309853 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0717 22:17:07.465294  309853 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0717 22:17:07.553804  309853 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0717 22:17:07.553833  309853 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0717 22:17:07.553989  309853 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-265316] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0717 22:17:07.554012  309853 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-265316] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0717 22:17:07.657471  309853 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0717 22:17:07.657507  309853 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0717 22:17:07.657702  309853 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-265316] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0717 22:17:07.657730  309853 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-265316] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0717 22:17:07.795533  309853 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 22:17:07.795577  309853 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 22:17:07.862980  309853 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 22:17:07.863036  309853 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 22:17:08.046575  309853 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0717 22:17:08.046614  309853 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0717 22:17:08.046669  309853 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 22:17:08.046674  309853 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 22:17:08.181769  309853 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 22:17:08.181808  309853 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 22:17:08.293851  309853 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 22:17:08.293878  309853 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 22:17:08.479675  309853 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 22:17:08.479706  309853 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 22:17:08.566300  309853 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 22:17:08.566341  309853 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 22:17:08.574528  309853 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 22:17:08.574561  309853 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 22:17:08.575482  309853 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 22:17:08.575509  309853 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 22:17:08.575564  309853 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0717 22:17:08.575573  309853 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0717 22:17:08.654324  309853 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 22:17:08.657033  309853 out.go:204]   - Booting up control plane ...
	I0717 22:17:08.654377  309853 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 22:17:08.657333  309853 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 22:17:08.657358  309853 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 22:17:08.657665  309853 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 22:17:08.657691  309853 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 22:17:08.658828  309853 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 22:17:08.658846  309853 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 22:17:08.659535  309853 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 22:17:08.659557  309853 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 22:17:08.661878  309853 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 22:17:08.661907  309853 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 22:17:13.664491  309853 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.002436 seconds
	I0717 22:17:13.664523  309853 command_runner.go:130] > [apiclient] All control plane components are healthy after 5.002436 seconds
	I0717 22:17:13.664668  309853 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 22:17:13.664710  309853 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 22:17:13.676452  309853 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 22:17:13.676490  309853 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 22:17:14.199277  309853 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 22:17:14.199310  309853 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0717 22:17:14.199667  309853 kubeadm.go:322] [mark-control-plane] Marking the node multinode-265316 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 22:17:14.199701  309853 command_runner.go:130] > [mark-control-plane] Marking the node multinode-265316 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 22:17:14.709240  309853 kubeadm.go:322] [bootstrap-token] Using token: k2lwon.i1huc3viud15knxw
	I0717 22:17:14.710906  309853 out.go:204]   - Configuring RBAC rules ...
	I0717 22:17:14.709368  309853 command_runner.go:130] > [bootstrap-token] Using token: k2lwon.i1huc3viud15knxw
	I0717 22:17:14.711027  309853 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 22:17:14.711042  309853 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 22:17:14.714677  309853 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 22:17:14.714694  309853 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 22:17:14.720658  309853 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 22:17:14.720685  309853 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 22:17:14.723550  309853 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 22:17:14.723580  309853 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 22:17:14.727462  309853 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 22:17:14.727489  309853 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 22:17:14.730024  309853 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 22:17:14.730048  309853 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 22:17:14.740154  309853 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 22:17:14.740211  309853 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 22:17:14.954694  309853 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0717 22:17:14.954723  309853 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0717 22:17:15.176146  309853 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0717 22:17:15.176185  309853 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0717 22:17:15.177413  309853 kubeadm.go:322] 
	I0717 22:17:15.177506  309853 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0717 22:17:15.177520  309853 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0717 22:17:15.177526  309853 kubeadm.go:322] 
	I0717 22:17:15.177621  309853 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0717 22:17:15.177630  309853 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0717 22:17:15.177635  309853 kubeadm.go:322] 
	I0717 22:17:15.177676  309853 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0717 22:17:15.177686  309853 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0717 22:17:15.177761  309853 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 22:17:15.177770  309853 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 22:17:15.177836  309853 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 22:17:15.177845  309853 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 22:17:15.177850  309853 kubeadm.go:322] 
	I0717 22:17:15.177940  309853 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0717 22:17:15.177947  309853 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0717 22:17:15.177952  309853 kubeadm.go:322] 
	I0717 22:17:15.178036  309853 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 22:17:15.178063  309853 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 22:17:15.178071  309853 kubeadm.go:322] 
	I0717 22:17:15.178138  309853 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0717 22:17:15.178155  309853 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0717 22:17:15.178216  309853 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 22:17:15.178222  309853 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 22:17:15.178276  309853 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 22:17:15.178282  309853 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 22:17:15.178285  309853 kubeadm.go:322] 
	I0717 22:17:15.178357  309853 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 22:17:15.178364  309853 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0717 22:17:15.178437  309853 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0717 22:17:15.178446  309853 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0717 22:17:15.178452  309853 kubeadm.go:322] 
	I0717 22:17:15.178541  309853 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token k2lwon.i1huc3viud15knxw \
	I0717 22:17:15.178548  309853 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token k2lwon.i1huc3viud15knxw \
	I0717 22:17:15.178632  309853 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:bfc53725e6665ea0346f55c73390f7faa9cc8aa313e76f38236964b5079a2a27 \
	I0717 22:17:15.178638  309853 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:bfc53725e6665ea0346f55c73390f7faa9cc8aa313e76f38236964b5079a2a27 \
	I0717 22:17:15.178655  309853 kubeadm.go:322] 	--control-plane 
	I0717 22:17:15.178661  309853 command_runner.go:130] > 	--control-plane 
	I0717 22:17:15.178664  309853 kubeadm.go:322] 
	I0717 22:17:15.178732  309853 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0717 22:17:15.178741  309853 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0717 22:17:15.178746  309853 kubeadm.go:322] 
	I0717 22:17:15.178829  309853 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token k2lwon.i1huc3viud15knxw \
	I0717 22:17:15.178841  309853 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token k2lwon.i1huc3viud15knxw \
	I0717 22:17:15.178984  309853 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:bfc53725e6665ea0346f55c73390f7faa9cc8aa313e76f38236964b5079a2a27 
	I0717 22:17:15.178994  309853 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:bfc53725e6665ea0346f55c73390f7faa9cc8aa313e76f38236964b5079a2a27 
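
The --discovery-token-ca-cert-hash printed with the join command is, per kubeadm's documented format, a SHA-256 digest of the DER-encoded Subject Public Key Info of the cluster CA certificate. A sketch reproducing the value, assuming the CA path used by this run:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Hash the DER-encoded SubjectPublicKeyInfo, as kubeadm does.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }
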
	I0717 22:17:15.181037  309853 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1037-gcp\n", err: exit status 1
	I0717 22:17:15.181068  309853 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1037-gcp\n", err: exit status 1
	I0717 22:17:15.181227  309853 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 22:17:15.181250  309853 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 22:17:15.181283  309853 cni.go:84] Creating CNI manager for ""
	I0717 22:17:15.181293  309853 cni.go:137] 1 nodes found, recommending kindnet
	I0717 22:17:15.183403  309853 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0717 22:17:15.184990  309853 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 22:17:15.188616  309853 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0717 22:17:15.188642  309853 command_runner.go:130] >   Size: 3955775   	Blocks: 7728       IO Block: 4096   regular file
	I0717 22:17:15.188655  309853 command_runner.go:130] > Device: 37h/55d	Inode: 2850400     Links: 1
	I0717 22:17:15.188666  309853 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 22:17:15.188680  309853 command_runner.go:130] > Access: 2023-05-09 19:53:47.000000000 +0000
	I0717 22:17:15.188691  309853 command_runner.go:130] > Modify: 2023-05-09 19:53:47.000000000 +0000
	I0717 22:17:15.188704  309853 command_runner.go:130] > Change: 2023-07-17 21:58:26.314622681 +0000
	I0717 22:17:15.188716  309853 command_runner.go:130] >  Birth: 2023-07-17 21:58:26.290621026 +0000
	I0717 22:17:15.188771  309853 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0717 22:17:15.188786  309853 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 22:17:15.267211  309853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0717 22:17:15.957497  309853 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0717 22:17:15.957521  309853 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0717 22:17:15.957526  309853 command_runner.go:130] > serviceaccount/kindnet created
	I0717 22:17:15.957530  309853 command_runner.go:130] > daemonset.apps/kindnet created
	I0717 22:17:15.957564  309853 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 22:17:15.957692  309853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.31.0 minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8 minikube.k8s.io/name=multinode-265316 minikube.k8s.io/updated_at=2023_07_17T22_17_15_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:17:15.957709  309853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:17:15.964729  309853 command_runner.go:130] > -16
	I0717 22:17:15.964773  309853 ops.go:34] apiserver oom_adj: -16
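
The oom_adj probe above reads /proc/<pid>/oom_adj for the apiserver; -16 means the kernel OOM killer should strongly avoid it. A sketch of the read with the PID discovery (pgrep in the log) elided; the pid argument is a placeholder:

    package main

    import (
        "fmt"
        "os"
        "strconv"
        "strings"
    )

    func oomAdj(pid int) (int, error) {
        b, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
        if err != nil {
            return 0, err
        }
        return strconv.Atoi(strings.TrimSpace(string(b)))
    }

    func main() {
        fmt.Println(oomAdj(1)) // pid 1 stands in for the apiserver's pid
    }
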
	I0717 22:17:16.060656  309853 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0717 22:17:16.060850  309853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:17:16.062708  309853 command_runner.go:130] > node/multinode-265316 labeled
	I0717 22:17:16.131061  309853 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 22:17:16.631781  309853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:17:16.695278  309853 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 22:17:17.131953  309853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:17:17.193492  309853 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 22:17:17.631640  309853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:17:17.693009  309853 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 22:17:18.131941  309853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:17:18.194237  309853 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 22:17:18.632270  309853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:17:18.693537  309853 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 22:17:19.131815  309853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:17:19.191594  309853 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 22:17:19.631657  309853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:17:19.693043  309853 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 22:17:20.132323  309853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:17:20.196134  309853 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 22:17:20.632245  309853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:17:20.692108  309853 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 22:17:21.131909  309853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:17:21.195455  309853 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 22:17:21.632084  309853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:17:21.695924  309853 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 22:17:22.132051  309853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:17:22.194267  309853 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 22:17:22.631215  309853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:17:22.696273  309853 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 22:17:23.131916  309853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:17:23.192583  309853 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 22:17:23.631593  309853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:17:23.695943  309853 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 22:17:24.131525  309853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:17:24.195519  309853 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 22:17:24.631548  309853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:17:24.692783  309853 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 22:17:25.132042  309853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:17:25.197807  309853 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 22:17:25.631324  309853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:17:25.695435  309853 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 22:17:26.132210  309853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:17:26.196733  309853 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 22:17:26.631373  309853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:17:26.696709  309853 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 22:17:27.132268  309853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:17:27.198135  309853 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 22:17:27.631764  309853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:17:27.696903  309853 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 22:17:28.131360  309853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:17:28.197612  309853 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 22:17:28.632308  309853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 22:17:28.696070  309853 command_runner.go:130] > NAME      SECRETS   AGE
	I0717 22:17:28.696096  309853 command_runner.go:130] > default   0         0s
	I0717 22:17:28.698872  309853 kubeadm.go:1081] duration metric: took 12.741217069s to wait for elevateKubeSystemPrivileges.
	I0717 22:17:28.698904  309853 kubeadm.go:406] StartCluster complete in 22.371271233s
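
The burst of NotFound errors above is a deliberate poll: roughly twice a second, minikube re-runs `kubectl get sa default` until kube-controller-manager's token controller creates the ServiceAccount, which is what the "wait for elevateKubeSystemPrivileges" duration measures. A minimal sketch of the same loop, simplified to treat any non-zero exit as "not yet":

    package main

    import (
        "os/exec"
        "time"
    )

    func main() {
        for {
            cmd := exec.Command("kubectl", "get", "sa", "default",
                "--kubeconfig", "/var/lib/minikube/kubeconfig")
            if cmd.Run() == nil {
                return // default service account exists
            }
            time.Sleep(500 * time.Millisecond)
        }
    }
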
	I0717 22:17:28.698921  309853 settings.go:142] acquiring lock: {Name:mkd04bbc59ef11ead8108410e404fcf464b56f66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:17:28.698996  309853 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16899-218877/kubeconfig
	I0717 22:17:28.699718  309853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16899-218877/kubeconfig: {Name:mkbb3c2ee0d4a9dc4a5c436ca7b4ee88dbc131b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:17:28.699942  309853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 22:17:28.700061  309853 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 22:17:28.700166  309853 addons.go:69] Setting storage-provisioner=true in profile "multinode-265316"
	I0717 22:17:28.700180  309853 config.go:182] Loaded profile config "multinode-265316": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:17:28.700190  309853 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16899-218877/kubeconfig
	I0717 22:17:28.700180  309853 addons.go:69] Setting default-storageclass=true in profile "multinode-265316"
	I0717 22:17:28.700285  309853 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-265316"
	I0717 22:17:28.700185  309853 addons.go:231] Setting addon storage-provisioner=true in "multinode-265316"
	I0717 22:17:28.700427  309853 host.go:66] Checking if "multinode-265316" exists ...
	I0717 22:17:28.700493  309853 kapi.go:59] client config for multinode-265316: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16899-218877/.minikube/profiles/multinode-265316/client.crt", KeyFile:"/home/jenkins/minikube-integration/16899-218877/.minikube/profiles/multinode-265316/client.key", CAFile:"/home/jenkins/minikube-integration/16899-218877/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
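
The rest.Config dump above is what client-go assembles from the kubeconfig updated a few lines earlier. A minimal sketch producing an equivalent config and clientset, with the kubeconfig path copied from the log:

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("",
            "/home/jenkins/minikube-integration/16899-218877/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println("host:", cfg.Host, "clientset ready:", cs != nil)
    }
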
	I0717 22:17:28.700788  309853 cli_runner.go:164] Run: docker container inspect multinode-265316 --format={{.State.Status}}
	I0717 22:17:28.700943  309853 cli_runner.go:164] Run: docker container inspect multinode-265316 --format={{.State.Status}}
	I0717 22:17:28.701245  309853 cert_rotation.go:137] Starting client certificate rotation controller
	I0717 22:17:28.701520  309853 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0717 22:17:28.701534  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:28.701546  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:28.701555  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:28.711100  309853 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0717 22:17:28.711127  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:28.711135  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:28.711141  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:28.711146  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:28.711151  309853 round_trippers.go:580]     Content-Length: 291
	I0717 22:17:28.711157  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:28 GMT
	I0717 22:17:28.711165  309853 round_trippers.go:580]     Audit-Id: 3c7fb670-585d-4c30-8088-1ecad25bb76f
	I0717 22:17:28.711179  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:28.711216  309853 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"8cd30164-7039-470f-9b7c-62d4569467c0","resourceVersion":"257","creationTimestamp":"2023-07-17T22:17:14Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0717 22:17:28.711666  309853 request.go:1188] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"8cd30164-7039-470f-9b7c-62d4569467c0","resourceVersion":"257","creationTimestamp":"2023-07-17T22:17:14Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0717 22:17:28.711731  309853 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0717 22:17:28.711743  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:28.711755  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:28.711767  309853 round_trippers.go:473]     Content-Type: application/json
	I0717 22:17:28.711775  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:28.719177  309853 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0717 22:17:28.719211  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:28.719224  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:28.719233  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:28.719241  309853 round_trippers.go:580]     Content-Length: 291
	I0717 22:17:28.719249  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:28 GMT
	I0717 22:17:28.719258  309853 round_trippers.go:580]     Audit-Id: d2f35ff3-967e-441d-9795-b7b8e50bc971
	I0717 22:17:28.719271  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:28.719280  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:28.719314  309853 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"8cd30164-7039-470f-9b7c-62d4569467c0","resourceVersion":"339","creationTimestamp":"2023-07-17T22:17:14Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
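
The GET/PUT pair above rescales the coredns Deployment from two replicas to one through the autoscaling/v1 Scale subresource, since a single-node cluster does not need two. A hedged client-go sketch of an equivalent rescale (kubeconfig path assumed):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deployments := cs.AppsV1().Deployments("kube-system")
        scale, err := deployments.GetScale(context.TODO(), "coredns", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        scale.Spec.Replicas = 1 // one replica is enough on a single node
        if _, err := deployments.UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
        fmt.Println("coredns rescaled to 1 replica")
    }
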
	I0717 22:17:28.722955  309853 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 22:17:28.721789  309853 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16899-218877/kubeconfig
	I0717 22:17:28.724636  309853 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 22:17:28.724658  309853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 22:17:28.724731  309853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-265316
	I0717 22:17:28.724860  309853 kapi.go:59] client config for multinode-265316: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16899-218877/.minikube/profiles/multinode-265316/client.crt", KeyFile:"/home/jenkins/minikube-integration/16899-218877/.minikube/profiles/multinode-265316/client.key", CAFile:"/home/jenkins/minikube-integration/16899-218877/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 22:17:28.725297  309853 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0717 22:17:28.725314  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:28.725326  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:28.725337  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:28.727916  309853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:17:28.727938  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:28.727952  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:28 GMT
	I0717 22:17:28.727961  309853 round_trippers.go:580]     Audit-Id: 655a478a-d138-405f-bf2f-7224722be9c4
	I0717 22:17:28.727974  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:28.727986  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:28.727999  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:28.728007  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:28.728020  309853 round_trippers.go:580]     Content-Length: 109
	I0717 22:17:28.728054  309853 request.go:1188] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"339"},"items":[]}
	I0717 22:17:28.728339  309853 addons.go:231] Setting addon default-storageclass=true in "multinode-265316"
	I0717 22:17:28.728382  309853 host.go:66] Checking if "multinode-265316" exists ...
	I0717 22:17:28.728837  309853 cli_runner.go:164] Run: docker container inspect multinode-265316 --format={{.State.Status}}
	I0717 22:17:28.745805  309853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/multinode-265316/id_rsa Username:docker}
	I0717 22:17:28.748439  309853 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 22:17:28.748462  309853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 22:17:28.748518  309853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-265316
	I0717 22:17:28.768127  309853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/multinode-265316/id_rsa Username:docker}
	I0717 22:17:28.777876  309853 command_runner.go:130] > apiVersion: v1
	I0717 22:17:28.777908  309853 command_runner.go:130] > data:
	I0717 22:17:28.777917  309853 command_runner.go:130] >   Corefile: |
	I0717 22:17:28.777923  309853 command_runner.go:130] >     .:53 {
	I0717 22:17:28.777930  309853 command_runner.go:130] >         errors
	I0717 22:17:28.777940  309853 command_runner.go:130] >         health {
	I0717 22:17:28.777948  309853 command_runner.go:130] >            lameduck 5s
	I0717 22:17:28.777955  309853 command_runner.go:130] >         }
	I0717 22:17:28.777962  309853 command_runner.go:130] >         ready
	I0717 22:17:28.777973  309853 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0717 22:17:28.777984  309853 command_runner.go:130] >            pods insecure
	I0717 22:17:28.777994  309853 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0717 22:17:28.778004  309853 command_runner.go:130] >            ttl 30
	I0717 22:17:28.778008  309853 command_runner.go:130] >         }
	I0717 22:17:28.778012  309853 command_runner.go:130] >         prometheus :9153
	I0717 22:17:28.778018  309853 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0717 22:17:28.778023  309853 command_runner.go:130] >            max_concurrent 1000
	I0717 22:17:28.778026  309853 command_runner.go:130] >         }
	I0717 22:17:28.778030  309853 command_runner.go:130] >         cache 30
	I0717 22:17:28.778036  309853 command_runner.go:130] >         loop
	I0717 22:17:28.778041  309853 command_runner.go:130] >         reload
	I0717 22:17:28.778051  309853 command_runner.go:130] >         loadbalance
	I0717 22:17:28.778057  309853 command_runner.go:130] >     }
	I0717 22:17:28.778067  309853 command_runner.go:130] > kind: ConfigMap
	I0717 22:17:28.778073  309853 command_runner.go:130] > metadata:
	I0717 22:17:28.778090  309853 command_runner.go:130] >   creationTimestamp: "2023-07-17T22:17:14Z"
	I0717 22:17:28.778100  309853 command_runner.go:130] >   name: coredns
	I0717 22:17:28.778107  309853 command_runner.go:130] >   namespace: kube-system
	I0717 22:17:28.778115  309853 command_runner.go:130] >   resourceVersion: "253"
	I0717 22:17:28.778123  309853 command_runner.go:130] >   uid: 8f940fbc-c3d5-4b21-ae19-e94e194ced01
	I0717 22:17:28.778331  309853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 22:17:28.881662  309853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 22:17:28.882030  309853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 22:17:29.220502  309853 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0717 22:17:29.220525  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:29.220533  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:29.220539  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:29.261949  309853 round_trippers.go:574] Response Status: 200 OK in 41 milliseconds
	I0717 22:17:29.261986  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:29.261998  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:29.262006  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:29.262014  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:29.262022  309853 round_trippers.go:580]     Content-Length: 291
	I0717 22:17:29.262030  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:29 GMT
	I0717 22:17:29.262039  309853 round_trippers.go:580]     Audit-Id: ace57767-20be-418a-824c-544078458613
	I0717 22:17:29.262058  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:29.262692  309853 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"8cd30164-7039-470f-9b7c-62d4569467c0","resourceVersion":"356","creationTimestamp":"2023-07-17T22:17:14Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0717 22:17:29.262865  309853 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-265316" context rescaled to 1 replicas
	I0717 22:17:29.262907  309853 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 22:17:29.265505  309853 out.go:177] * Verifying Kubernetes components...
	I0717 22:17:29.267150  309853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:17:29.470716  309853 command_runner.go:130] > configmap/coredns replaced
	I0717 22:17:29.472149  309853 start.go:901] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
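
The replace pipeline at 22:17:28.778331 splices a hosts block, mapping the gateway IP 192.168.58.1 to host.minikube.internal, into the Corefile dumped above, just ahead of the forward plugin. A client-go sketch of an equivalent in-place edit (kubeconfig path assumed; the sed-style insertion is simplified to a single string replacement):

    package main

    import (
        "context"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        cms := cs.CoreV1().ConfigMaps("kube-system")
        cm, err := cms.Get(context.TODO(), "coredns", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        hosts := "        hosts {\n" +
            "           192.168.58.1 host.minikube.internal\n" +
            "           fallthrough\n" +
            "        }\n"
        // Insert the hosts block immediately before the forward plugin.
        cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"],
            "        forward .", hosts+"        forward .", 1)
        if _, err := cms.Update(context.TODO(), cm, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
    }
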
	I0717 22:17:29.816726  309853 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0717 22:17:29.870014  309853 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0717 22:17:29.879430  309853 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0717 22:17:29.889554  309853 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0717 22:17:29.904969  309853 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0717 22:17:29.967350  309853 command_runner.go:130] > pod/storage-provisioner created
	I0717 22:17:29.973073  309853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.091339745s)
	I0717 22:17:29.973131  309853 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0717 22:17:29.973145  309853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.091095305s)
	I0717 22:17:29.974835  309853 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0717 22:17:29.973632  309853 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16899-218877/kubeconfig
	I0717 22:17:29.976408  309853 kapi.go:59] client config for multinode-265316: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16899-218877/.minikube/profiles/multinode-265316/client.crt", KeyFile:"/home/jenkins/minikube-integration/16899-218877/.minikube/profiles/multinode-265316/client.key", CAFile:"/home/jenkins/minikube-integration/16899-218877/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 22:17:29.976660  309853 node_ready.go:35] waiting up to 6m0s for node "multinode-265316" to be "Ready" ...
	I0717 22:17:29.976722  309853 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265316
	I0717 22:17:29.976730  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:29.976737  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:29.976743  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:29.976834  309853 addons.go:502] enable addons completed in 1.276776702s: enabled=[storage-provisioner default-storageclass]
	I0717 22:17:29.978851  309853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:17:29.978865  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:29.978873  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:29 GMT
	I0717 22:17:29.978878  309853 round_trippers.go:580]     Audit-Id: ee6aea8a-f368-4bd0-83cd-7797c831e307
	I0717 22:17:29.978884  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:29.978896  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:29.978902  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:29.978907  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:29.979255  309853 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265316","uid":"50c1c1e7-3d65-45f6-a51d-6b8108e33a81","resourceVersion":"361","creationTimestamp":"2023-07-17T22:17:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-265316","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-265316","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_17_15_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:17:12Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 22:17:30.480694  309853 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265316
	I0717 22:17:30.480717  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:30.480731  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:30.480737  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:30.483640  309853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:17:30.483660  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:30.483668  309853 round_trippers.go:580]     Audit-Id: 75068b9a-61a3-4785-8739-3d086b3ac176
	I0717 22:17:30.483677  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:30.483685  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:30.483694  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:30.483702  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:30.483715  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:30 GMT
	I0717 22:17:30.483856  309853 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265316","uid":"50c1c1e7-3d65-45f6-a51d-6b8108e33a81","resourceVersion":"361","creationTimestamp":"2023-07-17T22:17:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-265316","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-265316","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_17_15_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:17:12Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 22:17:30.980454  309853 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265316
	I0717 22:17:30.980479  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:30.980492  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:30.980504  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:30.982980  309853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:17:30.983010  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:30.983020  309853 round_trippers.go:580]     Audit-Id: 78c18430-5334-4e26-8b71-77ff2c846443
	I0717 22:17:30.983026  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:30.983032  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:30.983037  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:30.983043  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:30.983048  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:30 GMT
	I0717 22:17:30.983139  309853 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265316","uid":"50c1c1e7-3d65-45f6-a51d-6b8108e33a81","resourceVersion":"361","creationTimestamp":"2023-07-17T22:17:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-265316","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-265316","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_17_15_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:17:12Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 22:17:31.480808  309853 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265316
	I0717 22:17:31.480830  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:31.480839  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:31.480845  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:31.483241  309853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:17:31.483268  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:31.483280  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:31 GMT
	I0717 22:17:31.483289  309853 round_trippers.go:580]     Audit-Id: 58414aa9-dcd6-4693-9f70-b6ae4a82c6f9
	I0717 22:17:31.483296  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:31.483304  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:31.483312  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:31.483321  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:31.483470  309853 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265316","uid":"50c1c1e7-3d65-45f6-a51d-6b8108e33a81","resourceVersion":"361","creationTimestamp":"2023-07-17T22:17:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-265316","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-265316","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_17_15_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:17:12Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 22:17:31.980792  309853 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265316
	I0717 22:17:31.980814  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:31.980822  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:31.980827  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:31.983208  309853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:17:31.983226  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:31.983234  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:31.983240  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:31.983245  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:31.983251  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:31 GMT
	I0717 22:17:31.983256  309853 round_trippers.go:580]     Audit-Id: df4de362-72a8-4f4f-9bda-ba1257c8d1ef
	I0717 22:17:31.983262  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:31.983345  309853 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265316","uid":"50c1c1e7-3d65-45f6-a51d-6b8108e33a81","resourceVersion":"414","creationTimestamp":"2023-07-17T22:17:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-265316","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-265316","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_17_15_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:17:12Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0717 22:17:31.983689  309853 node_ready.go:49] node "multinode-265316" has status "Ready":"True"
	I0717 22:17:31.983704  309853 node_ready.go:38] duration metric: took 2.007030575s waiting for node "multinode-265316" to be "Ready" ...
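The node_ready wait above is a plain poll: fetch the Node object on a short interval until its Ready condition reports True, within the 6m0s budget the log states. A minimal client-go sketch of that loop, assuming a kubeconfig at an illustrative path (this is not minikube's actual helper):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; substitute your own.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Poll every 500ms for up to 6 minutes, matching the cadence and
	// budget visible in the log above.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, "multinode-265316", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as transient and keep polling
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("node multinode-265316 is Ready")
}

The resourceVersion jump from 361 to 414 in the responses above coincides with the status update that flipped Ready to True; each earlier GET simply returned the unchanged object.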
	I0717 22:17:31.983713  309853 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:17:31.983771  309853 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0717 22:17:31.983778  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:31.983786  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:31.983792  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:31.986930  309853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:17:31.986956  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:31.986968  309853 round_trippers.go:580]     Audit-Id: 4f45f4e2-0fde-41c8-bb89-0313f32a5e36
	I0717 22:17:31.986978  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:31.986987  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:31.986999  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:31.987009  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:31.987022  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:31 GMT
	I0717 22:17:31.987562  309853 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"420"},"items":[{"metadata":{"name":"coredns-5d78c9869d-s4bbn","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"f5bd6a07-4ec0-46cb-8b1e-ef5178a23919","resourceVersion":"419","creationTimestamp":"2023-07-17T22:17:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"24ad0d8f-b80a-4d3d-9682-ee0317a403b9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:17:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24ad0d8f-b80a-4d3d-9682-ee0317a403b9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55535 chars]
	I0717 22:17:31.990733  309853 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-s4bbn" in "kube-system" namespace to be "Ready" ...
	I0717 22:17:31.990816  309853 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-s4bbn
	I0717 22:17:31.990825  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:31.990833  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:31.990841  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:31.993007  309853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:17:31.993026  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:31.993036  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:31.993045  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:31.993053  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:31.993061  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:31 GMT
	I0717 22:17:31.993069  309853 round_trippers.go:580]     Audit-Id: b45e96c9-47b2-461e-959f-586eaf8de67b
	I0717 22:17:31.993076  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:31.993259  309853 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-s4bbn","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"f5bd6a07-4ec0-46cb-8b1e-ef5178a23919","resourceVersion":"419","creationTimestamp":"2023-07-17T22:17:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"24ad0d8f-b80a-4d3d-9682-ee0317a403b9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:17:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24ad0d8f-b80a-4d3d-9682-ee0317a403b9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0717 22:17:31.993710  309853 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265316
	I0717 22:17:31.993724  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:31.993731  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:31.993739  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:31.995623  309853 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 22:17:31.995673  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:31.995694  309853 round_trippers.go:580]     Audit-Id: 4f341967-86eb-4ae4-b416-640698ba8392
	I0717 22:17:31.995708  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:31.995717  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:31.995727  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:31.995738  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:31.995746  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:31 GMT
	I0717 22:17:31.995865  309853 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265316","uid":"50c1c1e7-3d65-45f6-a51d-6b8108e33a81","resourceVersion":"414","creationTimestamp":"2023-07-17T22:17:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-265316","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-265316","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_17_15_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:17:12Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0717 22:17:32.496685  309853 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-s4bbn
	I0717 22:17:32.496710  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:32.496718  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:32.496725  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:32.499092  309853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:17:32.499112  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:32.499121  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:32.499130  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:32.499137  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:32.499145  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:32.499153  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:32 GMT
	I0717 22:17:32.499161  309853 round_trippers.go:580]     Audit-Id: 65b85892-6074-49c1-9fa3-569818c82af7
	I0717 22:17:32.499274  309853 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-s4bbn","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"f5bd6a07-4ec0-46cb-8b1e-ef5178a23919","resourceVersion":"431","creationTimestamp":"2023-07-17T22:17:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"24ad0d8f-b80a-4d3d-9682-ee0317a403b9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:17:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24ad0d8f-b80a-4d3d-9682-ee0317a403b9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I0717 22:17:32.499820  309853 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265316
	I0717 22:17:32.499835  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:32.499842  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:32.499849  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:32.501838  309853 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 22:17:32.501855  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:32.501861  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:32.501867  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:32.501874  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:32.501886  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:32.501902  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:32 GMT
	I0717 22:17:32.501911  309853 round_trippers.go:580]     Audit-Id: 90196a83-e653-46b7-8669-af5632e70cd2
	I0717 22:17:32.502009  309853 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265316","uid":"50c1c1e7-3d65-45f6-a51d-6b8108e33a81","resourceVersion":"414","creationTimestamp":"2023-07-17T22:17:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-265316","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-265316","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_17_15_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:17:12Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0717 22:17:32.996621  309853 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-s4bbn
	I0717 22:17:32.996663  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:32.996672  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:32.996678  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:32.999213  309853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:17:32.999238  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:32.999248  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:32.999257  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:32.999263  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:32 GMT
	I0717 22:17:32.999271  309853 round_trippers.go:580]     Audit-Id: ff7c0145-3db0-409c-9e7e-a33abb07b228
	I0717 22:17:32.999280  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:32.999289  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:32.999405  309853 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-s4bbn","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"f5bd6a07-4ec0-46cb-8b1e-ef5178a23919","resourceVersion":"431","creationTimestamp":"2023-07-17T22:17:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"24ad0d8f-b80a-4d3d-9682-ee0317a403b9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:17:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24ad0d8f-b80a-4d3d-9682-ee0317a403b9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I0717 22:17:32.999876  309853 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265316
	I0717 22:17:32.999890  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:32.999898  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:32.999905  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:33.001930  309853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:17:33.001949  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:33.001956  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:33.001962  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:33 GMT
	I0717 22:17:33.001967  309853 round_trippers.go:580]     Audit-Id: 95462b1b-8b30-41ef-a21b-7444e8aee44d
	I0717 22:17:33.001973  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:33.001978  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:33.001986  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:33.002096  309853 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265316","uid":"50c1c1e7-3d65-45f6-a51d-6b8108e33a81","resourceVersion":"414","creationTimestamp":"2023-07-17T22:17:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-265316","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-265316","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_17_15_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:17:12Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0717 22:17:33.496735  309853 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-s4bbn
	I0717 22:17:33.496765  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:33.496774  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:33.496780  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:33.499104  309853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:17:33.499124  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:33.499131  309853 round_trippers.go:580]     Audit-Id: 12ab2f74-93cc-4a57-ac63-7effab1748f0
	I0717 22:17:33.499137  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:33.499142  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:33.499148  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:33.499157  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:33.499168  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:33 GMT
	I0717 22:17:33.499295  309853 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-s4bbn","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"f5bd6a07-4ec0-46cb-8b1e-ef5178a23919","resourceVersion":"435","creationTimestamp":"2023-07-17T22:17:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"24ad0d8f-b80a-4d3d-9682-ee0317a403b9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:17:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24ad0d8f-b80a-4d3d-9682-ee0317a403b9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0717 22:17:33.499797  309853 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265316
	I0717 22:17:33.499811  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:33.499819  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:33.499825  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:33.501717  309853 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 22:17:33.501737  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:33.501746  309853 round_trippers.go:580]     Audit-Id: 1faa2a90-2512-4547-8cee-7517e2a0f31b
	I0717 22:17:33.501755  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:33.501763  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:33.501773  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:33.501789  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:33.501798  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:33 GMT
	I0717 22:17:33.501886  309853 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265316","uid":"50c1c1e7-3d65-45f6-a51d-6b8108e33a81","resourceVersion":"414","creationTimestamp":"2023-07-17T22:17:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-265316","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-265316","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_17_15_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:17:12Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0717 22:17:33.502220  309853 pod_ready.go:92] pod "coredns-5d78c9869d-s4bbn" in "kube-system" namespace has status "Ready":"True"
	I0717 22:17:33.502237  309853 pod_ready.go:81] duration metric: took 1.511482646s waiting for pod "coredns-5d78c9869d-s4bbn" in "kube-system" namespace to be "Ready" ...
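Each pod_ready wait repeats the same shape per pod: GET the Pod, inspect its Ready condition, and re-poll until it is True. A hedged sketch of just that predicate (names are illustrative, not minikube's code):

package readiness

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podIsReady reports whether the named pod's Ready condition is True,
// i.e. the check behind each `has status "Ready":"True"` line in this log.
func podIsReady(ctx context.Context, client kubernetes.Interface, namespace, name string) (bool, error) {
	pod, err := client.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}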
	I0717 22:17:33.502253  309853 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-265316" in "kube-system" namespace to be "Ready" ...
	I0717 22:17:33.502318  309853 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-265316
	I0717 22:17:33.502327  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:33.502339  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:33.502352  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:33.504169  309853 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 22:17:33.504191  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:33.504202  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:33.504212  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:33.504224  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:33.504235  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:33 GMT
	I0717 22:17:33.504244  309853 round_trippers.go:580]     Audit-Id: 3bbdda33-fcbb-44b6-bfcc-108da0508eac
	I0717 22:17:33.504259  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:33.504360  309853 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-265316","namespace":"kube-system","uid":"7ce134e3-d832-431a-acea-e9c06ceab0df","resourceVersion":"297","creationTimestamp":"2023-07-17T22:17:15Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"3a8059633c8e010933f854a50f12fcb2","kubernetes.io/config.mirror":"3a8059633c8e010933f854a50f12fcb2","kubernetes.io/config.seen":"2023-07-17T22:17:14.996579724Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-265316","uid":"50c1c1e7-3d65-45f6-a51d-6b8108e33a81","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:17:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0717 22:17:33.504714  309853 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265316
	I0717 22:17:33.504730  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:33.504737  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:33.504743  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:33.506354  309853 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 22:17:33.506371  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:33.506380  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:33.506388  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:33.506396  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:33.506406  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:33 GMT
	I0717 22:17:33.506422  309853 round_trippers.go:580]     Audit-Id: 1cf9c675-25f5-4ba3-8ad0-31752f731170
	I0717 22:17:33.506431  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:33.506507  309853 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265316","uid":"50c1c1e7-3d65-45f6-a51d-6b8108e33a81","resourceVersion":"414","creationTimestamp":"2023-07-17T22:17:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-265316","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-265316","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_17_15_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:17:12Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0717 22:17:33.506790  309853 pod_ready.go:92] pod "etcd-multinode-265316" in "kube-system" namespace has status "Ready":"True"
	I0717 22:17:33.506805  309853 pod_ready.go:81] duration metric: took 4.541753ms waiting for pod "etcd-multinode-265316" in "kube-system" namespace to be "Ready" ...
	I0717 22:17:33.506821  309853 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-265316" in "kube-system" namespace to be "Ready" ...
	I0717 22:17:33.506871  309853 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-265316
	I0717 22:17:33.506881  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:33.506891  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:33.506904  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:33.508774  309853 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 22:17:33.508790  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:33.508796  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:33.508801  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:33.508806  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:33.508811  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:33.508817  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:33 GMT
	I0717 22:17:33.508822  309853 round_trippers.go:580]     Audit-Id: bed78445-0a4a-43de-86be-d78f5036380e
	I0717 22:17:33.508993  309853 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-265316","namespace":"kube-system","uid":"edf5311d-73b9-42b7-8847-62fa9c8eea08","resourceVersion":"290","creationTimestamp":"2023-07-17T22:17:15Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"402b82e1fb62f32056e02b21a5a14992","kubernetes.io/config.mirror":"402b82e1fb62f32056e02b21a5a14992","kubernetes.io/config.seen":"2023-07-17T22:17:14.996585862Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-265316","uid":"50c1c1e7-3d65-45f6-a51d-6b8108e33a81","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:17:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0717 22:17:33.509403  309853 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265316
	I0717 22:17:33.509415  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:33.509422  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:33.509430  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:33.512639  309853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:17:33.512662  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:33.512672  309853 round_trippers.go:580]     Audit-Id: fa34ad39-4293-4f59-9880-67ca286da969
	I0717 22:17:33.512687  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:33.512696  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:33.512705  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:33.512723  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:33.512737  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:33 GMT
	I0717 22:17:33.512871  309853 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265316","uid":"50c1c1e7-3d65-45f6-a51d-6b8108e33a81","resourceVersion":"414","creationTimestamp":"2023-07-17T22:17:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-265316","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-265316","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_17_15_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:17:12Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0717 22:17:33.513135  309853 pod_ready.go:92] pod "kube-apiserver-multinode-265316" in "kube-system" namespace has status "Ready":"True"
	I0717 22:17:33.513147  309853 pod_ready.go:81] duration metric: took 6.316836ms waiting for pod "kube-apiserver-multinode-265316" in "kube-system" namespace to be "Ready" ...
	I0717 22:17:33.513156  309853 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-265316" in "kube-system" namespace to be "Ready" ...
	I0717 22:17:33.513202  309853 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-265316
	I0717 22:17:33.513209  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:33.513216  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:33.513222  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:33.514835  309853 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 22:17:33.514850  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:33.514857  309853 round_trippers.go:580]     Audit-Id: 281432a8-ab83-413c-b00f-c07d049edeac
	I0717 22:17:33.514865  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:33.514874  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:33.514884  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:33.514893  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:33.514900  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:33 GMT
	I0717 22:17:33.515026  309853 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-265316","namespace":"kube-system","uid":"9f8c70c0-fa65-45e8-8531-94ce623ede94","resourceVersion":"294","creationTimestamp":"2023-07-17T22:17:15Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a41106563a620a43d8654372c53a786e","kubernetes.io/config.mirror":"a41106563a620a43d8654372c53a786e","kubernetes.io/config.seen":"2023-07-17T22:17:14.996587459Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-265316","uid":"50c1c1e7-3d65-45f6-a51d-6b8108e33a81","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:17:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0717 22:17:33.515439  309853 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265316
	I0717 22:17:33.515451  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:33.515458  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:33.515464  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:33.516922  309853 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 22:17:33.516936  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:33.516944  309853 round_trippers.go:580]     Audit-Id: a60f701c-17df-464f-8143-35b84f8e37d0
	I0717 22:17:33.516953  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:33.516966  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:33.516979  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:33.516988  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:33.516997  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:33 GMT
	I0717 22:17:33.517094  309853 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265316","uid":"50c1c1e7-3d65-45f6-a51d-6b8108e33a81","resourceVersion":"414","creationTimestamp":"2023-07-17T22:17:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-265316","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-265316","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_17_15_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:17:12Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0717 22:17:33.517391  309853 pod_ready.go:92] pod "kube-controller-manager-multinode-265316" in "kube-system" namespace has status "Ready":"True"
	I0717 22:17:33.517405  309853 pod_ready.go:81] duration metric: took 4.243714ms waiting for pod "kube-controller-manager-multinode-265316" in "kube-system" namespace to be "Ready" ...
	I0717 22:17:33.517416  309853 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4cxgd" in "kube-system" namespace to be "Ready" ...
	I0717 22:17:33.517477  309853 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4cxgd
	I0717 22:17:33.517487  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:33.517495  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:33.517508  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:33.519056  309853 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 22:17:33.519074  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:33.519083  309853 round_trippers.go:580]     Audit-Id: f862aea9-0b4e-49f7-9ce3-e04350a715b1
	I0717 22:17:33.519092  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:33.519105  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:33.519114  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:33.519125  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:33.519134  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:33 GMT
	I0717 22:17:33.519218  309853 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4cxgd","generateName":"kube-proxy-","namespace":"kube-system","uid":"1297fe2e-d86e-4494-a6ac-e8b95b9ef84a","resourceVersion":"408","creationTimestamp":"2023-07-17T22:17:29Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ef51c4db-c46d-498c-ae8b-747d67715984","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:17:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ef51c4db-c46d-498c-ae8b-747d67715984\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0717 22:17:33.580794  309853 request.go:628] Waited for 61.149495ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-265316
	I0717 22:17:33.580873  309853 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265316
	I0717 22:17:33.580883  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:33.580894  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:33.580907  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:33.583217  309853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:17:33.583237  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:33.583245  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:33.583254  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:33.583263  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:33 GMT
	I0717 22:17:33.583273  309853 round_trippers.go:580]     Audit-Id: 7f1a19ee-408b-4c8f-b990-f7331781d10b
	I0717 22:17:33.583284  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:33.583293  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:33.583461  309853 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265316","uid":"50c1c1e7-3d65-45f6-a51d-6b8108e33a81","resourceVersion":"414","creationTimestamp":"2023-07-17T22:17:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-265316","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-265316","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_17_15_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:17:12Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0717 22:17:33.583843  309853 pod_ready.go:92] pod "kube-proxy-4cxgd" in "kube-system" namespace has status "Ready":"True"
	I0717 22:17:33.583865  309853 pod_ready.go:81] duration metric: took 66.435194ms waiting for pod "kube-proxy-4cxgd" in "kube-system" namespace to be "Ready" ...
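The "Waited for ... due to client-side throttling" entries just below come from client-go's own rate limiter, not from server-side priority and fairness (the message says as much). The rest.Config dumped at the top of this section shows QPS:0 and Burst:0, which client-go replaces with its defaults of 5 QPS and a burst of 10, so a tight polling loop quickly starts queueing requests. Raising the limits is a small config change; a sketch under the same illustrative-kubeconfig assumption as above:

package client

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newFasterClient builds a clientset with a higher client-side rate limit.
// Zero QPS/Burst (as in the config dump above) mean the defaults, 5 and 10,
// which are what produce the throttling waits in this log.
func newFasterClient(kubeconfig string) (*kubernetes.Clientset, error) {
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	config.QPS = 50
	config.Burst = 100
	return kubernetes.NewForConfig(config)
}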
	I0717 22:17:33.583878  309853 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-265316" in "kube-system" namespace to be "Ready" ...
	I0717 22:17:33.781373  309853 request.go:628] Waited for 197.392628ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-265316
	I0717 22:17:33.781435  309853 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-265316
	I0717 22:17:33.781439  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:33.781448  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:33.781454  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:33.783949  309853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:17:33.783968  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:33.783978  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:33 GMT
	I0717 22:17:33.783988  309853 round_trippers.go:580]     Audit-Id: 9ac6fcd7-5688-4029-a62c-3b2779f572de
	I0717 22:17:33.783996  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:33.784008  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:33.784021  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:33.784033  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:33.784183  309853 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-265316","namespace":"kube-system","uid":"b0e68345-3086-4b83-ab1c-d654d72eba7e","resourceVersion":"293","creationTimestamp":"2023-07-17T22:17:15Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c9df8caab23fcf6726bdc907a7b4503e","kubernetes.io/config.mirror":"c9df8caab23fcf6726bdc907a7b4503e","kubernetes.io/config.seen":"2023-07-17T22:17:14.996589459Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-265316","uid":"50c1c1e7-3d65-45f6-a51d-6b8108e33a81","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:17:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0717 22:17:33.980879  309853 request.go:628] Waited for 196.19697ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-265316
	I0717 22:17:33.980942  309853 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265316
	I0717 22:17:33.980948  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:33.980959  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:33.980976  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:33.983367  309853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:17:33.983392  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:33.983402  309853 round_trippers.go:580]     Audit-Id: 06cd1007-6f0c-41ad-95fd-3b4dd3a67914
	I0717 22:17:33.983431  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:33.983440  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:33.983449  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:33.983458  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:33.983471  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:33 GMT
	I0717 22:17:33.983571  309853 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265316","uid":"50c1c1e7-3d65-45f6-a51d-6b8108e33a81","resourceVersion":"414","creationTimestamp":"2023-07-17T22:17:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-265316","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-265316","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_17_15_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T22:17:12Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0717 22:17:33.983875  309853 pod_ready.go:92] pod "kube-scheduler-multinode-265316" in "kube-system" namespace has status "Ready":"True"
	I0717 22:17:33.983884  309853 pod_ready.go:81] duration metric: took 399.998469ms waiting for pod "kube-scheduler-multinode-265316" in "kube-system" namespace to be "Ready" ...
	I0717 22:17:33.983893  309853 pod_ready.go:38] duration metric: took 2.000169696s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
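The "extra waiting" pass that just finished resolved its pod set from the label selectors named in the message (k8s-app=kube-dns, component=etcd, and so on) in the kube-system namespace. As a rough sketch only, listing pods per selector (minikube itself issues one unfiltered list, visible earlier as the GET of /namespaces/kube-system/pods):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; substitute your own.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// One selector per system-critical component named in the log line above.
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	for _, sel := range selectors {
		pods, err := client.CoreV1().Pods("kube-system").List(context.Background(),
			metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s -> %s\n", sel, p.Name)
		}
	}
}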
	I0717 22:17:33.983908  309853 api_server.go:52] waiting for apiserver process to appear ...
	I0717 22:17:33.983951  309853 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:17:33.993799  309853 command_runner.go:130] > 1391
	I0717 22:17:33.994452  309853 api_server.go:72] duration metric: took 4.731500917s to wait for apiserver process to appear ...
	I0717 22:17:33.994470  309853 api_server.go:88] waiting for apiserver healthz status ...
	I0717 22:17:33.994491  309853 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0717 22:17:33.999610  309853 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0717 22:17:33.999670  309853 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I0717 22:17:33.999678  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:33.999686  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:33.999695  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:34.000575  309853 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0717 22:17:34.000592  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:34.000599  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:34.000605  309853 round_trippers.go:580]     Content-Length: 263
	I0717 22:17:34.000610  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:34 GMT
	I0717 22:17:34.000618  309853 round_trippers.go:580]     Audit-Id: 096a5645-e88c-4ec6-8e09-b6a5708a8589
	I0717 22:17:34.000623  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:34.000631  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:34.000639  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:34.000656  309853 request.go:1188] Response Body: {
	  "major": "1",
	  "minor": "27",
	  "gitVersion": "v1.27.3",
	  "gitCommit": "25b4e43193bcda6c7328a6d147b1fb73a33f1598",
	  "gitTreeState": "clean",
	  "buildDate": "2023-06-14T09:47:40Z",
	  "goVersion": "go1.20.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0717 22:17:34.000730  309853 api_server.go:141] control plane version: v1.27.3
	I0717 22:17:34.000743  309853 api_server.go:131] duration metric: took 6.268114ms to wait for apiserver health ...
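The health wait above is just two raw HTTPS GETs: /healthz must return the literal body "ok", then /version yields the build-info JSON echoed in the log. A minimal standalone sketch of the same probe (not minikube's own code; the endpoint is taken from the log, and TLS verification is skipped for brevity where a real client would load the cluster CA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    )

    func main() {
    	// The apiserver serves a cert signed by the cluster CA; skipping
    	// verification here is only acceptable for a quick local probe.
    	c := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    	}}
    	for _, path := range []string{"/healthz", "/version"} {
    		resp, err := c.Get("https://192.168.58.2:8443" + path)
    		if err != nil {
    			fmt.Println(path, "error:", err)
    			continue
    		}
    		body, _ := io.ReadAll(resp.Body)
    		resp.Body.Close()
    		fmt.Printf("%s -> %d: %s\n", path, resp.StatusCode, body)
    	}
    }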
	I0717 22:17:34.000754  309853 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 22:17:34.181160  309853 request.go:628] Waited for 180.323761ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0717 22:17:34.181211  309853 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0717 22:17:34.181216  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:34.181228  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:34.181234  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:34.184939  309853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:17:34.184967  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:34.184979  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:34.184989  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:34.184999  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:34 GMT
	I0717 22:17:34.185012  309853 round_trippers.go:580]     Audit-Id: 7d37fe04-9396-46c6-b581-a8bc55d2aefd
	I0717 22:17:34.185021  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:34.185033  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:34.185482  309853 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"441"},"items":[{"metadata":{"name":"coredns-5d78c9869d-s4bbn","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"f5bd6a07-4ec0-46cb-8b1e-ef5178a23919","resourceVersion":"435","creationTimestamp":"2023-07-17T22:17:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"24ad0d8f-b80a-4d3d-9682-ee0317a403b9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:17:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24ad0d8f-b80a-4d3d-9682-ee0317a403b9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55613 chars]
	I0717 22:17:34.187215  309853 system_pods.go:59] 8 kube-system pods found
	I0717 22:17:34.187246  309853 system_pods.go:61] "coredns-5d78c9869d-s4bbn" [f5bd6a07-4ec0-46cb-8b1e-ef5178a23919] Running
	I0717 22:17:34.187254  309853 system_pods.go:61] "etcd-multinode-265316" [7ce134e3-d832-431a-acea-e9c06ceab0df] Running
	I0717 22:17:34.187261  309853 system_pods.go:61] "kindnet-29cp4" [00ea3df0-45d7-4c70-838e-12f1b43f9179] Running
	I0717 22:17:34.187268  309853 system_pods.go:61] "kube-apiserver-multinode-265316" [edf5311d-73b9-42b7-8847-62fa9c8eea08] Running
	I0717 22:17:34.187273  309853 system_pods.go:61] "kube-controller-manager-multinode-265316" [9f8c70c0-fa65-45e8-8531-94ce623ede94] Running
	I0717 22:17:34.187277  309853 system_pods.go:61] "kube-proxy-4cxgd" [1297fe2e-d86e-4494-a6ac-e8b95b9ef84a] Running
	I0717 22:17:34.187282  309853 system_pods.go:61] "kube-scheduler-multinode-265316" [b0e68345-3086-4b83-ab1c-d654d72eba7e] Running
	I0717 22:17:34.187286  309853 system_pods.go:61] "storage-provisioner" [030a51da-3ea5-4a58-8f1b-452efc02de5c] Running
	I0717 22:17:34.187295  309853 system_pods.go:74] duration metric: took 186.534043ms to wait for pod list to return data ...
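The "waiting for kube-system pods" step boils down to one pod list in the kube-system namespace with each item's phase checked against Running. A sketch of the equivalent check with client-go (the kubeconfig path is an assumption):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	// Mirrors the system_pods.go listing above: name plus phase.
    	for _, p := range pods.Items {
    		fmt.Printf("%-45s %s\n", p.Name, p.Status.Phase)
    	}
    }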
	I0717 22:17:34.187305  309853 default_sa.go:34] waiting for default service account to be created ...
	I0717 22:17:34.381767  309853 request.go:628] Waited for 194.355005ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0717 22:17:34.381820  309853 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0717 22:17:34.381825  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:34.381833  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:34.381839  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:34.384295  309853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:17:34.384315  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:34.384322  309853 round_trippers.go:580]     Audit-Id: 0f75cf14-894f-4784-b2b7-de2f14a6e42f
	I0717 22:17:34.384328  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:34.384334  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:34.384339  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:34.384345  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:34.384351  309853 round_trippers.go:580]     Content-Length: 261
	I0717 22:17:34.384359  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:34 GMT
	I0717 22:17:34.384382  309853 request.go:1188] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"441"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"48e4bac8-0ffd-4a9e-9fc7-2cdc91264ba7","resourceVersion":"334","creationTimestamp":"2023-07-17T22:17:28Z"}}]}
	I0717 22:17:34.384584  309853 default_sa.go:45] found service account: "default"
	I0717 22:17:34.384599  309853 default_sa.go:55] duration metric: took 197.287469ms for default service account to be created ...
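The recurring "Waited for ... due to client-side throttling" lines come from client-go's own token-bucket limiter (by default roughly 5 QPS with a burst of 10), not from server-side priority and fairness. If that pacing mattered, the limits could be raised on the rest.Config before building the clientset; a sketch with illustrative values:

    package main

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
    	if err != nil {
    		panic(err)
    	}
    	// client-go defaults to ~5 QPS / burst 10; the ~200ms waits logged
    	// above are this token bucket pacing back-to-back GETs.
    	cfg.QPS = 50
    	cfg.Burst = 100
    	if _, err := kubernetes.NewForConfig(cfg); err != nil {
    		panic(err)
    	}
    }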
	I0717 22:17:34.384607  309853 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 22:17:34.581022  309853 request.go:628] Waited for 196.320886ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0717 22:17:34.581083  309853 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0717 22:17:34.581088  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:34.581096  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:34.581102  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:34.584603  309853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:17:34.584632  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:34.584641  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:34.584646  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:34.584652  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:34.584658  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:34 GMT
	I0717 22:17:34.584667  309853 round_trippers.go:580]     Audit-Id: c9c0dbd2-b658-4ef9-a74d-97f90be729cd
	I0717 22:17:34.584673  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:34.585092  309853 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"441"},"items":[{"metadata":{"name":"coredns-5d78c9869d-s4bbn","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"f5bd6a07-4ec0-46cb-8b1e-ef5178a23919","resourceVersion":"435","creationTimestamp":"2023-07-17T22:17:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"24ad0d8f-b80a-4d3d-9682-ee0317a403b9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:17:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24ad0d8f-b80a-4d3d-9682-ee0317a403b9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55613 chars]
	I0717 22:17:34.586817  309853 system_pods.go:86] 8 kube-system pods found
	I0717 22:17:34.586837  309853 system_pods.go:89] "coredns-5d78c9869d-s4bbn" [f5bd6a07-4ec0-46cb-8b1e-ef5178a23919] Running
	I0717 22:17:34.586842  309853 system_pods.go:89] "etcd-multinode-265316" [7ce134e3-d832-431a-acea-e9c06ceab0df] Running
	I0717 22:17:34.586846  309853 system_pods.go:89] "kindnet-29cp4" [00ea3df0-45d7-4c70-838e-12f1b43f9179] Running
	I0717 22:17:34.586850  309853 system_pods.go:89] "kube-apiserver-multinode-265316" [edf5311d-73b9-42b7-8847-62fa9c8eea08] Running
	I0717 22:17:34.586854  309853 system_pods.go:89] "kube-controller-manager-multinode-265316" [9f8c70c0-fa65-45e8-8531-94ce623ede94] Running
	I0717 22:17:34.586858  309853 system_pods.go:89] "kube-proxy-4cxgd" [1297fe2e-d86e-4494-a6ac-e8b95b9ef84a] Running
	I0717 22:17:34.586862  309853 system_pods.go:89] "kube-scheduler-multinode-265316" [b0e68345-3086-4b83-ab1c-d654d72eba7e] Running
	I0717 22:17:34.586867  309853 system_pods.go:89] "storage-provisioner" [030a51da-3ea5-4a58-8f1b-452efc02de5c] Running
	I0717 22:17:34.586876  309853 system_pods.go:126] duration metric: took 202.262694ms to wait for k8s-apps to be running ...
	I0717 22:17:34.586890  309853 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 22:17:34.586933  309853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:17:34.597530  309853 system_svc.go:56] duration metric: took 10.630647ms WaitForService to wait for kubelet.
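The kubelet check is a single systemctl invocation whose exit status is the whole answer: --quiet suppresses output, and a zero exit means the unit is active. A sketch of the same probe run locally on the node (minikube runs it over SSH instead):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Exit code 0 => active; any non-zero exit surfaces as a non-nil error.
    	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
    	fmt.Println("kubelet active:", err == nil)
    }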
	I0717 22:17:34.597553  309853 kubeadm.go:581] duration metric: took 5.334604696s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 22:17:34.597574  309853 node_conditions.go:102] verifying NodePressure condition ...
	I0717 22:17:34.780939  309853 request.go:628] Waited for 183.267556ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0717 22:17:34.780993  309853 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0717 22:17:34.780998  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:34.781006  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:34.781012  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:34.783643  309853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:17:34.783665  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:34.783675  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:34.783684  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:34.783691  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:34.783698  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:34.783709  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:34 GMT
	I0717 22:17:34.783717  309853 round_trippers.go:580]     Audit-Id: 9bfd3bd4-a549-400d-870f-7d049c30b8af
	I0717 22:17:34.783865  309853 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"441"},"items":[{"metadata":{"name":"multinode-265316","uid":"50c1c1e7-3d65-45f6-a51d-6b8108e33a81","resourceVersion":"414","creationTimestamp":"2023-07-17T22:17:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-265316","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-265316","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_17_15_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6000 chars]
	I0717 22:17:34.784274  309853 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0717 22:17:34.784296  309853 node_conditions.go:123] node cpu capacity is 8
	I0717 22:17:34.784309  309853 node_conditions.go:105] duration metric: took 186.727591ms to run NodePressure ...
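The NodePressure step reads each node's reported capacity, which is where the 304681132Ki ephemeral-storage and 8-CPU figures above come from. A client-go sketch printing the same two quantities (kubeconfig path again assumed):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		storage := n.Status.Capacity["ephemeral-storage"]
    		cpu := n.Status.Capacity["cpu"]
    		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
    	}
    }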
	I0717 22:17:34.784323  309853 start.go:228] waiting for startup goroutines ...
	I0717 22:17:34.784335  309853 start.go:233] waiting for cluster config update ...
	I0717 22:17:34.784358  309853 start.go:242] writing updated cluster config ...
	I0717 22:17:34.787177  309853 out.go:177] 
	I0717 22:17:34.789000  309853 config.go:182] Loaded profile config "multinode-265316": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:17:34.789096  309853 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/multinode-265316/config.json ...
	I0717 22:17:34.791085  309853 out.go:177] * Starting worker node multinode-265316-m02 in cluster multinode-265316
	I0717 22:17:34.792569  309853 cache.go:122] Beginning downloading kic base image for docker with crio
	I0717 22:17:34.794041  309853 out.go:177] * Pulling base image ...
	I0717 22:17:34.795808  309853 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 22:17:34.795834  309853 cache.go:57] Caching tarball of preloaded images
	I0717 22:17:34.795891  309853 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 22:17:34.795927  309853 preload.go:174] Found /home/jenkins/minikube-integration/16899-218877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 22:17:34.795943  309853 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0717 22:17:34.796039  309853 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/multinode-265316/config.json ...
	I0717 22:17:34.811446  309853 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0717 22:17:34.811472  309853 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0717 22:17:34.811489  309853 cache.go:195] Successfully downloaded all kic artifacts
	I0717 22:17:34.811526  309853 start.go:365] acquiring machines lock for multinode-265316-m02: {Name:mk87880c61c3edef6a71afa137d176750210d607 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:17:34.811642  309853 start.go:369] acquired machines lock for "multinode-265316-m02" in 89.376µs
	I0717 22:17:34.811675  309853 start.go:93] Provisioning new machine with config: &{Name:multinode-265316 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-265316 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name:m02 IP: Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0717 22:17:34.811781  309853 start.go:125] createHost starting for "m02" (driver="docker")
	I0717 22:17:34.813858  309853 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0717 22:17:34.813972  309853 start.go:159] libmachine.API.Create for "multinode-265316" (driver="docker")
	I0717 22:17:34.814010  309853 client.go:168] LocalClient.Create starting
	I0717 22:17:34.814090  309853 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca.pem
	I0717 22:17:34.814128  309853 main.go:141] libmachine: Decoding PEM data...
	I0717 22:17:34.814184  309853 main.go:141] libmachine: Parsing certificate...
	I0717 22:17:34.814262  309853 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16899-218877/.minikube/certs/cert.pem
	I0717 22:17:34.814290  309853 main.go:141] libmachine: Decoding PEM data...
	I0717 22:17:34.814304  309853 main.go:141] libmachine: Parsing certificate...
	I0717 22:17:34.814553  309853 cli_runner.go:164] Run: docker network inspect multinode-265316 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 22:17:34.829109  309853 network_create.go:76] Found existing network {name:multinode-265316 subnet:0xc00123b8c0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0717 22:17:34.829145  309853 kic.go:117] calculated static IP "192.168.58.3" for the "multinode-265316-m02" container
	I0717 22:17:34.829206  309853 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0717 22:17:34.843745  309853 cli_runner.go:164] Run: docker volume create multinode-265316-m02 --label name.minikube.sigs.k8s.io=multinode-265316-m02 --label created_by.minikube.sigs.k8s.io=true
	I0717 22:17:34.859874  309853 oci.go:103] Successfully created a docker volume multinode-265316-m02
	I0717 22:17:34.859950  309853 cli_runner.go:164] Run: docker run --rm --name multinode-265316-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-265316-m02 --entrypoint /usr/bin/test -v multinode-265316-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0717 22:17:35.392406  309853 oci.go:107] Successfully prepared a docker volume multinode-265316-m02
	I0717 22:17:35.392441  309853 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 22:17:35.392466  309853 kic.go:190] Starting extracting preloaded images to volume ...
	I0717 22:17:35.392521  309853 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16899-218877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-265316-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0717 22:17:40.211122  309853 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16899-218877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-265316-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (4.818556926s)
	I0717 22:17:40.211155  309853 kic.go:199] duration metric: took 4.818685 seconds to extract preloaded images to volume
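The preload step mounts the lz4 tarball read-only into a throwaway kicbase container and untars it into the named volume, so the new node starts with its image store pre-populated. A sketch of that invocation via os/exec, with paths and image reference taken verbatim from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	tarball := "/home/jenkins/minikube-integration/16899-218877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4"
    	volume := "multinode-265316-m02"
    	image := "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631"

    	// Equivalent of the `docker run --rm --entrypoint /usr/bin/tar ...` above.
    	cmd := exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",
    		"-v", volume+":/extractDir",
    		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    	out, err := cmd.CombinedOutput()
    	fmt.Println(string(out))
    	if err != nil {
    		panic(err)
    	}
    }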
	W0717 22:17:40.211314  309853 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0717 22:17:40.211404  309853 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0717 22:17:40.261965  309853 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-265316-m02 --name multinode-265316-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-265316-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-265316-m02 --network multinode-265316 --ip 192.168.58.3 --volume multinode-265316-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0717 22:17:40.561165  309853 cli_runner.go:164] Run: docker container inspect multinode-265316-m02 --format={{.State.Running}}
	I0717 22:17:40.577986  309853 cli_runner.go:164] Run: docker container inspect multinode-265316-m02 --format={{.State.Status}}
	I0717 22:17:40.596107  309853 cli_runner.go:164] Run: docker exec multinode-265316-m02 stat /var/lib/dpkg/alternatives/iptables
	I0717 22:17:40.641164  309853 oci.go:144] the created container "multinode-265316-m02" has a running status.
	I0717 22:17:40.641203  309853 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16899-218877/.minikube/machines/multinode-265316-m02/id_rsa...
	I0717 22:17:40.956766  309853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/machines/multinode-265316-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0717 22:17:40.956814  309853 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16899-218877/.minikube/machines/multinode-265316-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0717 22:17:40.996930  309853 cli_runner.go:164] Run: docker container inspect multinode-265316-m02 --format={{.State.Status}}
	I0717 22:17:41.016144  309853 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0717 22:17:41.016166  309853 kic_runner.go:114] Args: [docker exec --privileged multinode-265316-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
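Key provisioning here appears to be a plain 2048-bit RSA keypair: the private half written PEM-encoded, the public half in authorized_keys format (the ~381-byte line copied above). A sketch of the same generation (hypothetical filenames, written to the current directory):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	// PEM-encoded PKCS#1 private key, mode 0600 as SSH expects.
    	privPEM := pem.EncodeToMemory(&pem.Block{
    		Type:  "RSA PRIVATE KEY",
    		Bytes: x509.MarshalPKCS1PrivateKey(key),
    	})
    	if err := os.WriteFile("id_rsa", privPEM, 0600); err != nil {
    		panic(err)
    	}
    	// Single authorized_keys-format line for the public half.
    	pub, err := ssh.NewPublicKey(&key.PublicKey)
    	if err != nil {
    		panic(err)
    	}
    	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0644); err != nil {
    		panic(err)
    	}
    	fmt.Println("wrote id_rsa and id_rsa.pub")
    }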
	I0717 22:17:41.082574  309853 cli_runner.go:164] Run: docker container inspect multinode-265316-m02 --format={{.State.Status}}
	I0717 22:17:41.104835  309853 machine.go:88] provisioning docker machine ...
	I0717 22:17:41.104875  309853 ubuntu.go:169] provisioning hostname "multinode-265316-m02"
	I0717 22:17:41.104942  309853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-265316-m02
	I0717 22:17:41.124877  309853 main.go:141] libmachine: Using SSH client type: native
	I0717 22:17:41.125294  309853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0717 22:17:41.125308  309853 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-265316-m02 && echo "multinode-265316-m02" | sudo tee /etc/hostname
	I0717 22:17:41.293923  309853 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-265316-m02
	
	I0717 22:17:41.293994  309853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-265316-m02
	I0717 22:17:41.311821  309853 main.go:141] libmachine: Using SSH client type: native
	I0717 22:17:41.312233  309853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0717 22:17:41.312252  309853 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-265316-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-265316-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-265316-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 22:17:41.435357  309853 main.go:141] libmachine: SSH cmd err, output: <nil>: 
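The hostname commands above travel over the forwarded SSH port (127.0.0.1:32852 here) using the key just installed. A sketch of running the same one-liner with golang.org/x/crypto/ssh (host-key checking is skipped, which is only tolerable for a throwaway local container):

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	keyBytes, err := os.ReadFile("id_rsa") // key generated as in the sketch above
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(keyBytes)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway local container only
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:32852", cfg) // forwarded port from the log
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput(`sudo hostname multinode-265316-m02 && echo "multinode-265316-m02" | sudo tee /etc/hostname`)
    	fmt.Println(string(out))
    	if err != nil {
    		panic(err)
    	}
    }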
	I0717 22:17:41.435386  309853 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16899-218877/.minikube CaCertPath:/home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16899-218877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16899-218877/.minikube}
	I0717 22:17:41.435406  309853 ubuntu.go:177] setting up certificates
	I0717 22:17:41.435431  309853 provision.go:83] configureAuth start
	I0717 22:17:41.435485  309853 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-265316-m02
	I0717 22:17:41.451491  309853 provision.go:138] copyHostCerts
	I0717 22:17:41.451530  309853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16899-218877/.minikube/cert.pem
	I0717 22:17:41.451560  309853 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-218877/.minikube/cert.pem, removing ...
	I0717 22:17:41.451568  309853 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-218877/.minikube/cert.pem
	I0717 22:17:41.451630  309853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16899-218877/.minikube/cert.pem (1123 bytes)
	I0717 22:17:41.451707  309853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16899-218877/.minikube/key.pem
	I0717 22:17:41.451725  309853 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-218877/.minikube/key.pem, removing ...
	I0717 22:17:41.451732  309853 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-218877/.minikube/key.pem
	I0717 22:17:41.451755  309853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16899-218877/.minikube/key.pem (1679 bytes)
	I0717 22:17:41.451796  309853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16899-218877/.minikube/ca.pem
	I0717 22:17:41.451811  309853 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-218877/.minikube/ca.pem, removing ...
	I0717 22:17:41.451817  309853 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-218877/.minikube/ca.pem
	I0717 22:17:41.451836  309853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16899-218877/.minikube/ca.pem (1078 bytes)
	I0717 22:17:41.451882  309853 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16899-218877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca-key.pem org=jenkins.multinode-265316-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-265316-m02]
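The server cert is an ordinary X.509 leaf whose SAN list carries exactly the names and IPs from the log line above. A self-signed sketch with crypto/x509 (minikube signs with its CA key rather than self-signing; the 26280h lifetime matches CertExpiration in the config dump):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-265316-m02"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour),
    		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs from the san=[...] list in the log.
    		DNSNames:    []string{"localhost", "minikube", "multinode-265316-m02"},
    		IPAddresses: []net.IP{net.ParseIP("192.168.58.3"), net.ParseIP("127.0.0.1")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
    	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
    	if err := os.WriteFile("server.pem", certPEM, 0644); err != nil {
    		panic(err)
    	}
    	if err := os.WriteFile("server-key.pem", keyPEM, 0600); err != nil {
    		panic(err)
    	}
    }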
	I0717 22:17:41.605220  309853 provision.go:172] copyRemoteCerts
	I0717 22:17:41.605282  309853 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 22:17:41.605317  309853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-265316-m02
	I0717 22:17:41.621136  309853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/multinode-265316-m02/id_rsa Username:docker}
	I0717 22:17:41.716019  309853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 22:17:41.716082  309853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 22:17:41.737026  309853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 22:17:41.737079  309853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0717 22:17:41.757395  309853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 22:17:41.757453  309853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 22:17:41.777642  309853 provision.go:86] duration metric: configureAuth took 342.199648ms
	I0717 22:17:41.777666  309853 ubuntu.go:193] setting minikube options for container-runtime
	I0717 22:17:41.777836  309853 config.go:182] Loaded profile config "multinode-265316": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:17:41.777935  309853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-265316-m02
	I0717 22:17:41.797421  309853 main.go:141] libmachine: Using SSH client type: native
	I0717 22:17:41.798063  309853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0717 22:17:41.798098  309853 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "

	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 22:17:42.006682  309853 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 22:17:42.006721  309853 machine.go:91] provisioned docker machine in 901.863617ms
	I0717 22:17:42.006732  309853 client.go:171] LocalClient.Create took 7.192714467s
	I0717 22:17:42.006749  309853 start.go:167] duration metric: libmachine.API.Create for "multinode-265316" took 7.192779574s
	I0717 22:17:42.006756  309853 start.go:300] post-start starting for "multinode-265316-m02" (driver="docker")
	I0717 22:17:42.006768  309853 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 22:17:42.006842  309853 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 22:17:42.006896  309853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-265316-m02
	I0717 22:17:42.023664  309853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/multinode-265316-m02/id_rsa Username:docker}
	I0717 22:17:42.115860  309853 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 22:17:42.118788  309853 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.2 LTS"
	I0717 22:17:42.118811  309853 command_runner.go:130] > NAME="Ubuntu"
	I0717 22:17:42.118817  309853 command_runner.go:130] > VERSION_ID="22.04"
	I0717 22:17:42.118822  309853 command_runner.go:130] > VERSION="22.04.2 LTS (Jammy Jellyfish)"
	I0717 22:17:42.118827  309853 command_runner.go:130] > VERSION_CODENAME=jammy
	I0717 22:17:42.118831  309853 command_runner.go:130] > ID=ubuntu
	I0717 22:17:42.118835  309853 command_runner.go:130] > ID_LIKE=debian
	I0717 22:17:42.118839  309853 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0717 22:17:42.118843  309853 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0717 22:17:42.118849  309853 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0717 22:17:42.118856  309853 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0717 22:17:42.118863  309853 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0717 22:17:42.118921  309853 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 22:17:42.118942  309853 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 22:17:42.118959  309853 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 22:17:42.118971  309853 info.go:137] Remote host: Ubuntu 22.04.2 LTS
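/etc/os-release is plain KEY=value lines, optionally quoted, which is what the "Couldn't set key" warnings above are about: fields like VERSION_CODENAME exist in the file but have no counterpart in the target struct. A sketch of parsing it into a map:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	f, err := os.Open("/etc/os-release")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()
    	info := map[string]string{}
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := sc.Text()
    		k, v, ok := strings.Cut(line, "=")
    		if !ok || strings.HasPrefix(line, "#") {
    			continue // skip blanks and comments
    		}
    		info[k] = strings.Trim(v, `"`) // values may be double-quoted
    	}
    	fmt.Printf("%s %s\n", info["NAME"], info["VERSION"]) // e.g. Ubuntu 22.04.2 LTS
    }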
	I0717 22:17:42.118985  309853 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-218877/.minikube/addons for local assets ...
	I0717 22:17:42.119045  309853 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-218877/.minikube/files for local assets ...
	I0717 22:17:42.119137  309853 filesync.go:149] local asset: /home/jenkins/minikube-integration/16899-218877/.minikube/files/etc/ssl/certs/2256422.pem -> 2256422.pem in /etc/ssl/certs
	I0717 22:17:42.119152  309853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/files/etc/ssl/certs/2256422.pem -> /etc/ssl/certs/2256422.pem
	I0717 22:17:42.119256  309853 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 22:17:42.126730  309853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/files/etc/ssl/certs/2256422.pem --> /etc/ssl/certs/2256422.pem (1708 bytes)
	I0717 22:17:42.147592  309853 start.go:303] post-start completed in 140.821525ms
	I0717 22:17:42.147921  309853 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-265316-m02
	I0717 22:17:42.164379  309853 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/multinode-265316/config.json ...
	I0717 22:17:42.164598  309853 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 22:17:42.164640  309853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-265316-m02
	I0717 22:17:42.179522  309853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/multinode-265316-m02/id_rsa Username:docker}
	I0717 22:17:42.267854  309853 command_runner.go:130] > 21%
	I0717 22:17:42.267917  309853 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 22:17:42.271792  309853 command_runner.go:130] > 232G
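The two df calls read percent-used and gigabytes-free on /var. The same numbers can be had without shelling out via statfs(2); a sketch (the percentage is an approximation, since df also accounts for blocks reserved for root):

    package main

    import (
    	"fmt"
    	"syscall"
    )

    func main() {
    	var st syscall.Statfs_t
    	if err := syscall.Statfs("/var", &st); err != nil {
    		panic(err)
    	}
    	total := st.Blocks * uint64(st.Bsize) // filesystem size in bytes
    	avail := st.Bavail * uint64(st.Bsize) // bytes available to unprivileged users
    	fmt.Printf("used: ~%d%%  free: %dG\n", 100-(avail*100/total), avail>>30)
    }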
	I0717 22:17:42.271822  309853 start.go:128] duration metric: createHost completed in 7.460030939s
	I0717 22:17:42.271831  309853 start.go:83] releasing machines lock for "multinode-265316-m02", held for 7.460175418s
	I0717 22:17:42.271898  309853 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-265316-m02
	I0717 22:17:42.291753  309853 out.go:177] * Found network options:
	I0717 22:17:42.293721  309853 out.go:177]   - NO_PROXY=192.168.58.2
	W0717 22:17:42.295177  309853 proxy.go:119] fail to check proxy env: Error ip not in block
	W0717 22:17:42.295231  309853 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 22:17:42.295314  309853 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 22:17:42.295358  309853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-265316-m02
	I0717 22:17:42.295443  309853 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 22:17:42.295520  309853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-265316-m02
	I0717 22:17:42.312278  309853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/multinode-265316-m02/id_rsa Username:docker}
	I0717 22:17:42.312281  309853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/multinode-265316-m02/id_rsa Username:docker}
	I0717 22:17:42.531325  309853 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0717 22:17:42.531442  309853 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 22:17:42.535628  309853 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0717 22:17:42.535652  309853 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0717 22:17:42.535666  309853 command_runner.go:130] > Device: b0h/176d	Inode: 2846107     Links: 1
	I0717 22:17:42.535675  309853 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 22:17:42.535696  309853 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0717 22:17:42.535708  309853 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0717 22:17:42.535721  309853 command_runner.go:130] > Change: 2023-07-17 21:58:25.914595095 +0000
	I0717 22:17:42.535733  309853 command_runner.go:130] >  Birth: 2023-07-17 21:58:25.914595095 +0000
	I0717 22:17:42.535798  309853 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 22:17:42.552881  309853 cni.go:227] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0717 22:17:42.552962  309853 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 22:17:42.578538  309853 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0717 22:17:42.578569  309853 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
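Disabling the stock CNI configs is just a rename sweep: anything matching *bridge* or *podman* in /etc/cni/net.d gets a .mk_disabled suffix, so kindnet's config is the only one left. A sketch of the equivalent sweep (needs root, like the find/mv above):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
    		matches, err := filepath.Glob(pat)
    		if err != nil {
    			panic(err)
    		}
    		for _, m := range matches {
    			if filepath.Ext(m) == ".mk_disabled" {
    				continue // already sidelined on an earlier pass
    			}
    			if err := os.Rename(m, m+".mk_disabled"); err != nil {
    				panic(err)
    			}
    			fmt.Println("disabled", m)
    		}
    	}
    }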
	I0717 22:17:42.578576  309853 start.go:466] detecting cgroup driver to use...
	I0717 22:17:42.578605  309853 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 22:17:42.578650  309853 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 22:17:42.592107  309853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 22:17:42.601996  309853 docker.go:196] disabling cri-docker service (if available) ...
	I0717 22:17:42.602041  309853 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 22:17:42.613712  309853 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 22:17:42.625615  309853 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 22:17:42.701563  309853 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 22:17:42.714374  309853 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0717 22:17:42.781977  309853 docker.go:212] disabling docker service ...
	I0717 22:17:42.782041  309853 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 22:17:42.799134  309853 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 22:17:42.809305  309853 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 22:17:42.882168  309853 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0717 22:17:42.882231  309853 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 22:17:42.964863  309853 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0717 22:17:42.964948  309853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 22:17:42.974974  309853 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 22:17:42.988640  309853 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0717 22:17:42.989434  309853 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 22:17:42.989483  309853 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:17:42.998341  309853 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 22:17:42.998401  309853 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:17:43.007153  309853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:17:43.015584  309853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:17:43.023809  309853 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 22:17:43.031595  309853 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 22:17:43.038137  309853 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0717 22:17:43.038731  309853 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 22:17:43.045876  309853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 22:17:43.120067  309853 ssh_runner.go:195] Run: sudo systemctl restart crio
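The three sed calls edit the CRI-O drop-in in place: pin the pause image, force the cgroupfs cgroup manager, and put conmon in the pod cgroup, after which crio is restarted. A rough Go equivalent of those edits (an approximation of the sed sequence, not minikube's code; needs root):

    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	path := "/etc/crio/crio.conf.d/02-crio.conf"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	// Pin the pause image.
    	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
    	// Drop any existing conmon_cgroup lines.
    	data = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAll(data, nil)
    	// Force cgroupfs and re-add conmon_cgroup right below it.
    	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAll(data, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
    	if err := os.WriteFile(path, data, 0644); err != nil {
    		panic(err)
    	}
    }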
	I0717 22:17:43.224053  309853 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 22:17:43.224130  309853 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 22:17:43.227486  309853 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0717 22:17:43.227511  309853 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0717 22:17:43.227521  309853 command_runner.go:130] > Device: bah/186d	Inode: 186         Links: 1
	I0717 22:17:43.227533  309853 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 22:17:43.227544  309853 command_runner.go:130] > Access: 2023-07-17 22:17:43.210406272 +0000
	I0717 22:17:43.227553  309853 command_runner.go:130] > Modify: 2023-07-17 22:17:43.210406272 +0000
	I0717 22:17:43.227559  309853 command_runner.go:130] > Change: 2023-07-17 22:17:43.210406272 +0000
	I0717 22:17:43.227563  309853 command_runner.go:130] >  Birth: -
	I0717 22:17:43.227613  309853 start.go:534] Will wait 60s for crictl version
	I0717 22:17:43.227657  309853 ssh_runner.go:195] Run: which crictl
	I0717 22:17:43.230514  309853 command_runner.go:130] > /usr/bin/crictl
	I0717 22:17:43.230634  309853 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 22:17:43.261174  309853 command_runner.go:130] > Version:  0.1.0
	I0717 22:17:43.261202  309853 command_runner.go:130] > RuntimeName:  cri-o
	I0717 22:17:43.261214  309853 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0717 22:17:43.261223  309853 command_runner.go:130] > RuntimeApiVersion:  v1
	I0717 22:17:43.263078  309853 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0717 22:17:43.263222  309853 ssh_runner.go:195] Run: crio --version
	I0717 22:17:43.298679  309853 command_runner.go:130] > crio version 1.24.6
	I0717 22:17:43.298706  309853 command_runner.go:130] > Version:          1.24.6
	I0717 22:17:43.298713  309853 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0717 22:17:43.298718  309853 command_runner.go:130] > GitTreeState:     clean
	I0717 22:17:43.298728  309853 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0717 22:17:43.298735  309853 command_runner.go:130] > GoVersion:        go1.18.2
	I0717 22:17:43.298739  309853 command_runner.go:130] > Compiler:         gc
	I0717 22:17:43.298743  309853 command_runner.go:130] > Platform:         linux/amd64
	I0717 22:17:43.298749  309853 command_runner.go:130] > Linkmode:         dynamic
	I0717 22:17:43.298766  309853 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0717 22:17:43.298773  309853 command_runner.go:130] > SeccompEnabled:   true
	I0717 22:17:43.298777  309853 command_runner.go:130] > AppArmorEnabled:  false
	I0717 22:17:43.298848  309853 ssh_runner.go:195] Run: crio --version
	I0717 22:17:43.332139  309853 command_runner.go:130] > crio version 1.24.6
	I0717 22:17:43.332161  309853 command_runner.go:130] > Version:          1.24.6
	I0717 22:17:43.332167  309853 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0717 22:17:43.332180  309853 command_runner.go:130] > GitTreeState:     clean
	I0717 22:17:43.332187  309853 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0717 22:17:43.332191  309853 command_runner.go:130] > GoVersion:        go1.18.2
	I0717 22:17:43.332195  309853 command_runner.go:130] > Compiler:         gc
	I0717 22:17:43.332199  309853 command_runner.go:130] > Platform:         linux/amd64
	I0717 22:17:43.332204  309853 command_runner.go:130] > Linkmode:         dynamic
	I0717 22:17:43.332212  309853 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0717 22:17:43.332216  309853 command_runner.go:130] > SeccompEnabled:   true
	I0717 22:17:43.332220  309853 command_runner.go:130] > AppArmorEnabled:  false
	I0717 22:17:43.334085  309853 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.6 ...
	I0717 22:17:43.335393  309853 out.go:177]   - env NO_PROXY=192.168.58.2
	I0717 22:17:43.336635  309853 cli_runner.go:164] Run: docker network inspect multinode-265316 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 22:17:43.354178  309853 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0717 22:17:43.357806  309853 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 22:17:43.367744  309853 certs.go:56] Setting up /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/multinode-265316 for IP: 192.168.58.3
	I0717 22:17:43.367786  309853 certs.go:190] acquiring lock for shared ca certs: {Name:mk5feafb57b96958f78245f8503644226fe57af0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 22:17:43.367952  309853 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16899-218877/.minikube/ca.key
	I0717 22:17:43.368003  309853 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16899-218877/.minikube/proxy-client-ca.key
	I0717 22:17:43.368020  309853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 22:17:43.368041  309853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 22:17:43.368056  309853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 22:17:43.368072  309853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 22:17:43.368140  309853 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/home/jenkins/minikube-integration/16899-218877/.minikube/certs/225642.pem (1338 bytes)
	W0717 22:17:43.368210  309853 certs.go:433] ignoring /home/jenkins/minikube-integration/16899-218877/.minikube/certs/home/jenkins/minikube-integration/16899-218877/.minikube/certs/225642_empty.pem, impossibly tiny 0 bytes
	I0717 22:17:43.368226  309853 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 22:17:43.368259  309853 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca.pem (1078 bytes)
	I0717 22:17:43.368295  309853 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/home/jenkins/minikube-integration/16899-218877/.minikube/certs/cert.pem (1123 bytes)
	I0717 22:17:43.368326  309853 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/home/jenkins/minikube-integration/16899-218877/.minikube/certs/key.pem (1679 bytes)
	I0717 22:17:43.368378  309853 certs.go:437] found cert: /home/jenkins/minikube-integration/16899-218877/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16899-218877/.minikube/files/etc/ssl/certs/2256422.pem (1708 bytes)
	I0717 22:17:43.368418  309853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/225642.pem -> /usr/share/ca-certificates/225642.pem
	I0717 22:17:43.368437  309853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/files/etc/ssl/certs/2256422.pem -> /usr/share/ca-certificates/2256422.pem
	I0717 22:17:43.368454  309853 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16899-218877/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:17:43.368792  309853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 22:17:43.389701  309853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 22:17:43.410543  309853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 22:17:43.432144  309853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 22:17:43.454480  309853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/certs/225642.pem --> /usr/share/ca-certificates/225642.pem (1338 bytes)
	I0717 22:17:43.476474  309853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/files/etc/ssl/certs/2256422.pem --> /usr/share/ca-certificates/2256422.pem (1708 bytes)
	I0717 22:17:43.498101  309853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 22:17:43.519973  309853 ssh_runner.go:195] Run: openssl version
	I0717 22:17:43.524941  309853 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0717 22:17:43.525017  309853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2256422.pem && ln -fs /usr/share/ca-certificates/2256422.pem /etc/ssl/certs/2256422.pem"
	I0717 22:17:43.534083  309853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2256422.pem
	I0717 22:17:43.537474  309853 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 17 22:04 /usr/share/ca-certificates/2256422.pem
	I0717 22:17:43.537506  309853 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 22:04 /usr/share/ca-certificates/2256422.pem
	I0717 22:17:43.537549  309853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2256422.pem
	I0717 22:17:43.543848  309853 command_runner.go:130] > 3ec20f2e
	I0717 22:17:43.543968  309853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2256422.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 22:17:43.552849  309853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 22:17:43.561623  309853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:17:43.564819  309853 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 17 21:58 /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:17:43.564869  309853 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 21:58 /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:17:43.564908  309853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 22:17:43.570801  309853 command_runner.go:130] > b5213941
	I0717 22:17:43.570991  309853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 22:17:43.579882  309853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/225642.pem && ln -fs /usr/share/ca-certificates/225642.pem /etc/ssl/certs/225642.pem"
	I0717 22:17:43.589519  309853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/225642.pem
	I0717 22:17:43.592902  309853 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 17 22:04 /usr/share/ca-certificates/225642.pem
	I0717 22:17:43.592928  309853 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 22:04 /usr/share/ca-certificates/225642.pem
	I0717 22:17:43.592960  309853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/225642.pem
	I0717 22:17:43.599237  309853 command_runner.go:130] > 51391683
	I0717 22:17:43.599306  309853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/225642.pem /etc/ssl/certs/51391683.0"
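The three hash-and-symlink passes above all follow the standard OpenSSL CA directory convention: each certificate under /etc/ssl/certs must be reachable through a symlink named <subject-hash>.0 so libssl can locate it by hash lookup. A minimal sketch of the same pattern, assuming a hypothetical certificate at /usr/share/ca-certificates/example.pem:

	# Compute the subject hash, then install the <hash>.0 symlink (sketch).
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)
	sudo ln -fs /usr/share/ca-certificates/example.pem "/etc/ssl/certs/${hash}.0"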
	I0717 22:17:43.608256  309853 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 22:17:43.611151  309853 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 22:17:43.611224  309853 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 22:17:43.611314  309853 ssh_runner.go:195] Run: crio config
	I0717 22:17:43.648059  309853 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0717 22:17:43.648087  309853 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0717 22:17:43.648101  309853 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0717 22:17:43.648106  309853 command_runner.go:130] > #
	I0717 22:17:43.648118  309853 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0717 22:17:43.648131  309853 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0717 22:17:43.648147  309853 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0717 22:17:43.648159  309853 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0717 22:17:43.648165  309853 command_runner.go:130] > # reload'.
	I0717 22:17:43.648175  309853 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0717 22:17:43.648186  309853 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0717 22:17:43.648199  309853 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0717 22:17:43.648212  309853 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0717 22:17:43.648219  309853 command_runner.go:130] > [crio]
	I0717 22:17:43.648260  309853 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0717 22:17:43.648268  309853 command_runner.go:130] > # container images, in this directory.
	I0717 22:17:43.648284  309853 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0717 22:17:43.648295  309853 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0717 22:17:43.648308  309853 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0717 22:17:43.648326  309853 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0717 22:17:43.648333  309853 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0717 22:17:43.648339  309853 command_runner.go:130] > # storage_driver = "vfs"
	I0717 22:17:43.648348  309853 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0717 22:17:43.648359  309853 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0717 22:17:43.648365  309853 command_runner.go:130] > # storage_option = [
	I0717 22:17:43.648371  309853 command_runner.go:130] > # ]
	I0717 22:17:43.648381  309853 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0717 22:17:43.648391  309853 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0717 22:17:43.648398  309853 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0717 22:17:43.648404  309853 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0717 22:17:43.648409  309853 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0717 22:17:43.648414  309853 command_runner.go:130] > # always happen on a node reboot
	I0717 22:17:43.648418  309853 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0717 22:17:43.648425  309853 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0717 22:17:43.648438  309853 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0717 22:17:43.648459  309853 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0717 22:17:43.648475  309853 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0717 22:17:43.648490  309853 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0717 22:17:43.648510  309853 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0717 22:17:43.648517  309853 command_runner.go:130] > # internal_wipe = true
	I0717 22:17:43.648529  309853 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0717 22:17:43.648544  309853 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0717 22:17:43.648557  309853 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0717 22:17:43.648576  309853 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0717 22:17:43.648586  309853 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0717 22:17:43.648598  309853 command_runner.go:130] > [crio.api]
	I0717 22:17:43.648607  309853 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0717 22:17:43.648615  309853 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0717 22:17:43.648627  309853 command_runner.go:130] > # IP address on which the stream server will listen.
	I0717 22:17:43.648637  309853 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0717 22:17:43.648647  309853 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0717 22:17:43.648659  309853 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0717 22:17:43.648671  309853 command_runner.go:130] > # stream_port = "0"
	I0717 22:17:43.648680  309853 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0717 22:17:43.648688  309853 command_runner.go:130] > # stream_enable_tls = false
	I0717 22:17:43.648701  309853 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0717 22:17:43.648712  309853 command_runner.go:130] > # stream_idle_timeout = ""
	I0717 22:17:43.648722  309853 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0717 22:17:43.648733  309853 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0717 22:17:43.648740  309853 command_runner.go:130] > # minutes.
	I0717 22:17:43.648750  309853 command_runner.go:130] > # stream_tls_cert = ""
	I0717 22:17:43.648763  309853 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0717 22:17:43.648774  309853 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0717 22:17:43.648780  309853 command_runner.go:130] > # stream_tls_key = ""
	I0717 22:17:43.648788  309853 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0717 22:17:43.648801  309853 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0717 22:17:43.648809  309853 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0717 22:17:43.648815  309853 command_runner.go:130] > # stream_tls_ca = ""
	I0717 22:17:43.648826  309853 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0717 22:17:43.648833  309853 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0717 22:17:43.648843  309853 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0717 22:17:43.648850  309853 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0717 22:17:43.648913  309853 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0717 22:17:43.648924  309853 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0717 22:17:43.648930  309853 command_runner.go:130] > [crio.runtime]
	I0717 22:17:43.648940  309853 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0717 22:17:43.648949  309853 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0717 22:17:43.648955  309853 command_runner.go:130] > # "nofile=1024:2048"
	I0717 22:17:43.648964  309853 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0717 22:17:43.648971  309853 command_runner.go:130] > # default_ulimits = [
	I0717 22:17:43.648976  309853 command_runner.go:130] > # ]
	I0717 22:17:43.648986  309853 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0717 22:17:43.648992  309853 command_runner.go:130] > # no_pivot = false
	I0717 22:17:43.649002  309853 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0717 22:17:43.649012  309853 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0717 22:17:43.649021  309853 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0717 22:17:43.649030  309853 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0717 22:17:43.649038  309853 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0717 22:17:43.649050  309853 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0717 22:17:43.649054  309853 command_runner.go:130] > # conmon = ""
	I0717 22:17:43.649059  309853 command_runner.go:130] > # Cgroup setting for conmon
	I0717 22:17:43.649065  309853 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0717 22:17:43.649069  309853 command_runner.go:130] > conmon_cgroup = "pod"
	I0717 22:17:43.649075  309853 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0717 22:17:43.649080  309853 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0717 22:17:43.649086  309853 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0717 22:17:43.649089  309853 command_runner.go:130] > # conmon_env = [
	I0717 22:17:43.649093  309853 command_runner.go:130] > # ]
	I0717 22:17:43.649098  309853 command_runner.go:130] > # Additional environment variables to set for all the
	I0717 22:17:43.649102  309853 command_runner.go:130] > # containers. These are overridden if set in the
	I0717 22:17:43.649108  309853 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0717 22:17:43.649112  309853 command_runner.go:130] > # default_env = [
	I0717 22:17:43.649115  309853 command_runner.go:130] > # ]
	I0717 22:17:43.649124  309853 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0717 22:17:43.649127  309853 command_runner.go:130] > # selinux = false
	I0717 22:17:43.649133  309853 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0717 22:17:43.649139  309853 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0717 22:17:43.649145  309853 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0717 22:17:43.649148  309853 command_runner.go:130] > # seccomp_profile = ""
	I0717 22:17:43.649154  309853 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0717 22:17:43.649159  309853 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0717 22:17:43.649166  309853 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0717 22:17:43.649173  309853 command_runner.go:130] > # which might increase security.
	I0717 22:17:43.649180  309853 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0717 22:17:43.649189  309853 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0717 22:17:43.649200  309853 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0717 22:17:43.649212  309853 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0717 22:17:43.649226  309853 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0717 22:17:43.649233  309853 command_runner.go:130] > # This option supports live configuration reload.
	I0717 22:17:43.649239  309853 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0717 22:17:43.649246  309853 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0717 22:17:43.649253  309853 command_runner.go:130] > # the cgroup blockio controller.
	I0717 22:17:43.649258  309853 command_runner.go:130] > # blockio_config_file = ""
	I0717 22:17:43.649266  309853 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0717 22:17:43.649272  309853 command_runner.go:130] > # irqbalance daemon.
	I0717 22:17:43.649279  309853 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0717 22:17:43.649287  309853 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0717 22:17:43.649293  309853 command_runner.go:130] > # This option supports live configuration reload.
	I0717 22:17:43.649298  309853 command_runner.go:130] > # rdt_config_file = ""
	I0717 22:17:43.649305  309853 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0717 22:17:43.649310  309853 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0717 22:17:43.649317  309853 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0717 22:17:43.649324  309853 command_runner.go:130] > # separate_pull_cgroup = ""
	I0717 22:17:43.649333  309853 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0717 22:17:43.649341  309853 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0717 22:17:43.649348  309853 command_runner.go:130] > # will be added.
	I0717 22:17:43.649355  309853 command_runner.go:130] > # default_capabilities = [
	I0717 22:17:43.649361  309853 command_runner.go:130] > # 	"CHOWN",
	I0717 22:17:43.649367  309853 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0717 22:17:43.649373  309853 command_runner.go:130] > # 	"FSETID",
	I0717 22:17:43.649382  309853 command_runner.go:130] > # 	"FOWNER",
	I0717 22:17:43.649388  309853 command_runner.go:130] > # 	"SETGID",
	I0717 22:17:43.649407  309853 command_runner.go:130] > # 	"SETUID",
	I0717 22:17:43.649413  309853 command_runner.go:130] > # 	"SETPCAP",
	I0717 22:17:43.649420  309853 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0717 22:17:43.649426  309853 command_runner.go:130] > # 	"KILL",
	I0717 22:17:43.649432  309853 command_runner.go:130] > # ]
	I0717 22:17:43.649443  309853 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0717 22:17:43.649456  309853 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0717 22:17:43.649468  309853 command_runner.go:130] > # add_inheritable_capabilities = true
	I0717 22:17:43.649482  309853 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0717 22:17:43.649495  309853 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0717 22:17:43.649505  309853 command_runner.go:130] > # default_sysctls = [
	I0717 22:17:43.649511  309853 command_runner.go:130] > # ]
	I0717 22:17:43.649519  309853 command_runner.go:130] > # List of devices on the host that a
	I0717 22:17:43.649531  309853 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0717 22:17:43.649538  309853 command_runner.go:130] > # allowed_devices = [
	I0717 22:17:43.649545  309853 command_runner.go:130] > # 	"/dev/fuse",
	I0717 22:17:43.649553  309853 command_runner.go:130] > # ]
	I0717 22:17:43.649563  309853 command_runner.go:130] > # List of additional devices, specified as
	I0717 22:17:43.649624  309853 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0717 22:17:43.649636  309853 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0717 22:17:43.649649  309853 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0717 22:17:43.649656  309853 command_runner.go:130] > # additional_devices = [
	I0717 22:17:43.649665  309853 command_runner.go:130] > # ]
	I0717 22:17:43.649674  309853 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0717 22:17:43.649685  309853 command_runner.go:130] > # cdi_spec_dirs = [
	I0717 22:17:43.649691  309853 command_runner.go:130] > # 	"/etc/cdi",
	I0717 22:17:43.649701  309853 command_runner.go:130] > # 	"/var/run/cdi",
	I0717 22:17:43.649706  309853 command_runner.go:130] > # ]
	I0717 22:17:43.649718  309853 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0717 22:17:43.649728  309853 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0717 22:17:43.649735  309853 command_runner.go:130] > # Defaults to false.
	I0717 22:17:43.649747  309853 command_runner.go:130] > # device_ownership_from_security_context = false
	I0717 22:17:43.649758  309853 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0717 22:17:43.649771  309853 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0717 22:17:43.649781  309853 command_runner.go:130] > # hooks_dir = [
	I0717 22:17:43.649789  309853 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0717 22:17:43.649797  309853 command_runner.go:130] > # ]
	I0717 22:17:43.649808  309853 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0717 22:17:43.649818  309853 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0717 22:17:43.649827  309853 command_runner.go:130] > # its default mounts from the following two files:
	I0717 22:17:43.649836  309853 command_runner.go:130] > #
	I0717 22:17:43.649847  309853 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0717 22:17:43.649862  309853 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0717 22:17:43.649874  309853 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0717 22:17:43.649882  309853 command_runner.go:130] > #
	I0717 22:17:43.649893  309853 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0717 22:17:43.649903  309853 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0717 22:17:43.649916  309853 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0717 22:17:43.649928  309853 command_runner.go:130] > #      only add mounts it finds in this file.
	I0717 22:17:43.649934  309853 command_runner.go:130] > #
	I0717 22:17:43.649945  309853 command_runner.go:130] > # default_mounts_file = ""
	I0717 22:17:43.649959  309853 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0717 22:17:43.649973  309853 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0717 22:17:43.649982  309853 command_runner.go:130] > # pids_limit = 0
	I0717 22:17:43.649994  309853 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0717 22:17:43.650003  309853 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0717 22:17:43.650013  309853 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0717 22:17:43.650029  309853 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0717 22:17:43.650039  309853 command_runner.go:130] > # log_size_max = -1
	I0717 22:17:43.650051  309853 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0717 22:17:43.650062  309853 command_runner.go:130] > # log_to_journald = false
	I0717 22:17:43.650075  309853 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0717 22:17:43.650085  309853 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0717 22:17:43.650093  309853 command_runner.go:130] > # Path to directory for container attach sockets.
	I0717 22:17:43.650101  309853 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0717 22:17:43.650114  309853 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0717 22:17:43.650125  309853 command_runner.go:130] > # bind_mount_prefix = ""
	I0717 22:17:43.650134  309853 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0717 22:17:43.650144  309853 command_runner.go:130] > # read_only = false
	I0717 22:17:43.650159  309853 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0717 22:17:43.650171  309853 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0717 22:17:43.650177  309853 command_runner.go:130] > # live configuration reload.
	I0717 22:17:43.650185  309853 command_runner.go:130] > # log_level = "info"
	I0717 22:17:43.650194  309853 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0717 22:17:43.650206  309853 command_runner.go:130] > # This option supports live configuration reload.
	I0717 22:17:43.650216  309853 command_runner.go:130] > # log_filter = ""
	I0717 22:17:43.650232  309853 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0717 22:17:43.650245  309853 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0717 22:17:43.650255  309853 command_runner.go:130] > # separated by comma.
	I0717 22:17:43.650264  309853 command_runner.go:130] > # uid_mappings = ""
	I0717 22:17:43.650272  309853 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0717 22:17:43.650283  309853 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0717 22:17:43.650293  309853 command_runner.go:130] > # separated by comma.
	I0717 22:17:43.650303  309853 command_runner.go:130] > # gid_mappings = ""
	I0717 22:17:43.650316  309853 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0717 22:17:43.650330  309853 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0717 22:17:43.650343  309853 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0717 22:17:43.650353  309853 command_runner.go:130] > # minimum_mappable_uid = -1
	I0717 22:17:43.650362  309853 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0717 22:17:43.650372  309853 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0717 22:17:43.650388  309853 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0717 22:17:43.650399  309853 command_runner.go:130] > # minimum_mappable_gid = -1
	I0717 22:17:43.650410  309853 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0717 22:17:43.650423  309853 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0717 22:17:43.650436  309853 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0717 22:17:43.650446  309853 command_runner.go:130] > # ctr_stop_timeout = 30
	I0717 22:17:43.650453  309853 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0717 22:17:43.650486  309853 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0717 22:17:43.650499  309853 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0717 22:17:43.650507  309853 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0717 22:17:43.650517  309853 command_runner.go:130] > # drop_infra_ctr = true
	I0717 22:17:43.650531  309853 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0717 22:17:43.650544  309853 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0717 22:17:43.650559  309853 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0717 22:17:43.650570  309853 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0717 22:17:43.650577  309853 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0717 22:17:43.650589  309853 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0717 22:17:43.650600  309853 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0717 22:17:43.650614  309853 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0717 22:17:43.650624  309853 command_runner.go:130] > # pinns_path = ""
	I0717 22:17:43.650635  309853 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0717 22:17:43.650648  309853 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0717 22:17:43.650662  309853 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0717 22:17:43.650670  309853 command_runner.go:130] > # default_runtime = "runc"
	I0717 22:17:43.650676  309853 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0717 22:17:43.650691  309853 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0717 22:17:43.650710  309853 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0717 22:17:43.650722  309853 command_runner.go:130] > # creation as a file is not desired either.
	I0717 22:17:43.650739  309853 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0717 22:17:43.650750  309853 command_runner.go:130] > # the hostname is being managed dynamically.
	I0717 22:17:43.650761  309853 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0717 22:17:43.650769  309853 command_runner.go:130] > # ]
	I0717 22:17:43.650775  309853 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0717 22:17:43.650788  309853 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0717 22:17:43.650803  309853 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0717 22:17:43.650818  309853 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0717 22:17:43.650829  309853 command_runner.go:130] > #
	I0717 22:17:43.650840  309853 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0717 22:17:43.650852  309853 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0717 22:17:43.650861  309853 command_runner.go:130] > #  runtime_type = "oci"
	I0717 22:17:43.650870  309853 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0717 22:17:43.650875  309853 command_runner.go:130] > #  privileged_without_host_devices = false
	I0717 22:17:43.650886  309853 command_runner.go:130] > #  allowed_annotations = []
	I0717 22:17:43.650896  309853 command_runner.go:130] > # Where:
	I0717 22:17:43.650906  309853 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0717 22:17:43.650920  309853 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0717 22:17:43.650933  309853 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0717 22:17:43.650949  309853 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0717 22:17:43.650958  309853 command_runner.go:130] > #   in $PATH.
	I0717 22:17:43.650967  309853 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0717 22:17:43.650974  309853 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0717 22:17:43.650984  309853 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0717 22:17:43.650994  309853 command_runner.go:130] > #   state.
	I0717 22:17:43.651006  309853 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0717 22:17:43.651019  309853 command_runner.go:130] > #   file. This can only be used with the VM runtime_type.
	I0717 22:17:43.651033  309853 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0717 22:17:43.651045  309853 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0717 22:17:43.651058  309853 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0717 22:17:43.651068  309853 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0717 22:17:43.651077  309853 command_runner.go:130] > #   The currently recognized values are:
	I0717 22:17:43.651092  309853 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0717 22:17:43.651107  309853 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0717 22:17:43.651121  309853 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0717 22:17:43.651134  309853 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0717 22:17:43.651148  309853 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0717 22:17:43.651157  309853 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0717 22:17:43.651166  309853 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0717 22:17:43.651181  309853 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0717 22:17:43.651193  309853 command_runner.go:130] > #   should be moved to the container's cgroup
	I0717 22:17:43.651204  309853 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0717 22:17:43.651212  309853 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0717 22:17:43.651226  309853 command_runner.go:130] > runtime_type = "oci"
	I0717 22:17:43.651238  309853 command_runner.go:130] > runtime_root = "/run/runc"
	I0717 22:17:43.651246  309853 command_runner.go:130] > runtime_config_path = ""
	I0717 22:17:43.651251  309853 command_runner.go:130] > monitor_path = ""
	I0717 22:17:43.651260  309853 command_runner.go:130] > monitor_cgroup = ""
	I0717 22:17:43.651268  309853 command_runner.go:130] > monitor_exec_cgroup = ""
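The [crio.runtime.runtimes.runc] table above is the only concrete runtime handler in this configuration. An additional handler can be registered without editing the main file, since CRI-O also merges drop-in files from /etc/crio/crio.conf.d. A minimal sketch, assuming a hypothetical crun binary at /usr/bin/crun (whether a reload or a full restart picks up runtime-table changes depends on the CRI-O version, so a restart is used here):

	# Hypothetical drop-in registering crun as an extra runtime handler (sketch).
	sudo tee /etc/crio/crio.conf.d/10-crun.conf >/dev/null <<'EOF'
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	EOF
	sudo systemctl restart crio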
	I0717 22:17:43.651359  309853 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0717 22:17:43.651377  309853 command_runner.go:130] > # running containers
	I0717 22:17:43.651385  309853 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0717 22:17:43.651396  309853 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0717 22:17:43.651439  309853 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0717 22:17:43.651452  309853 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I0717 22:17:43.651464  309853 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0717 22:17:43.651475  309853 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0717 22:17:43.651489  309853 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0717 22:17:43.651497  309853 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0717 22:17:43.651509  309853 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0717 22:17:43.651519  309853 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0717 22:17:43.651533  309853 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0717 22:17:43.651545  309853 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0717 22:17:43.651557  309853 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0717 22:17:43.651570  309853 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix, and a set of resources it supports mutating.
	I0717 22:17:43.651587  309853 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0717 22:17:43.651600  309853 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0717 22:17:43.651618  309853 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0717 22:17:43.651635  309853 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0717 22:17:43.651646  309853 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0717 22:17:43.651659  309853 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0717 22:17:43.651669  309853 command_runner.go:130] > # Example:
	I0717 22:17:43.651678  309853 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0717 22:17:43.651690  309853 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0717 22:17:43.651701  309853 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0717 22:17:43.651713  309853 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0717 22:17:43.651722  309853 command_runner.go:130] > # cpuset = 0
	I0717 22:17:43.651729  309853 command_runner.go:130] > # cpushares = "0-1"
	I0717 22:17:43.651737  309853 command_runner.go:130] > # Where:
	I0717 22:17:43.651742  309853 command_runner.go:130] > # The workload name is workload-type.
	I0717 22:17:43.651756  309853 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0717 22:17:43.651769  309853 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0717 22:17:43.651786  309853 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0717 22:17:43.651802  309853 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0717 22:17:43.651815  309853 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0717 22:17:43.651823  309853 command_runner.go:130] > # 
	I0717 22:17:43.651833  309853 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0717 22:17:43.651839  309853 command_runner.go:130] > #
	I0717 22:17:43.651848  309853 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0717 22:17:43.651862  309853 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0717 22:17:43.651876  309853 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0717 22:17:43.651891  309853 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0717 22:17:43.651904  309853 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0717 22:17:43.651913  309853 command_runner.go:130] > [crio.image]
	I0717 22:17:43.651923  309853 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0717 22:17:43.651933  309853 command_runner.go:130] > # default_transport = "docker://"
	I0717 22:17:43.651944  309853 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0717 22:17:43.651955  309853 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0717 22:17:43.651966  309853 command_runner.go:130] > # global_auth_file = ""
	I0717 22:17:43.651979  309853 command_runner.go:130] > # The image used to instantiate infra containers.
	I0717 22:17:43.651990  309853 command_runner.go:130] > # This option supports live configuration reload.
	I0717 22:17:43.652002  309853 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0717 22:17:43.652015  309853 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0717 22:17:43.652026  309853 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0717 22:17:43.652034  309853 command_runner.go:130] > # This option supports live configuration reload.
	I0717 22:17:43.652051  309853 command_runner.go:130] > # pause_image_auth_file = ""
	I0717 22:17:43.652065  309853 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0717 22:17:43.652076  309853 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0717 22:17:43.652089  309853 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0717 22:17:43.652102  309853 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0717 22:17:43.652112  309853 command_runner.go:130] > # pause_command = "/pause"
	I0717 22:17:43.652126  309853 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0717 22:17:43.652140  309853 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0717 22:17:43.652153  309853 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0717 22:17:43.652167  309853 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0717 22:17:43.652185  309853 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0717 22:17:43.652193  309853 command_runner.go:130] > # signature_policy = ""
	I0717 22:17:43.652214  309853 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0717 22:17:43.652230  309853 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0717 22:17:43.652237  309853 command_runner.go:130] > # changing them here.
	I0717 22:17:43.652241  309853 command_runner.go:130] > # insecure_registries = [
	I0717 22:17:43.652247  309853 command_runner.go:130] > # ]
	I0717 22:17:43.652253  309853 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0717 22:17:43.652261  309853 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0717 22:17:43.652266  309853 command_runner.go:130] > # image_volumes = "mkdir"
	I0717 22:17:43.652271  309853 command_runner.go:130] > # Temporary directory to use for storing big files
	I0717 22:17:43.652277  309853 command_runner.go:130] > # big_files_temporary_dir = ""
	I0717 22:17:43.652283  309853 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0717 22:17:43.652290  309853 command_runner.go:130] > # CNI plugins.
	I0717 22:17:43.652293  309853 command_runner.go:130] > [crio.network]
	I0717 22:17:43.652301  309853 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0717 22:17:43.652306  309853 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0717 22:17:43.652313  309853 command_runner.go:130] > # cni_default_network = ""
	I0717 22:17:43.652319  309853 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0717 22:17:43.652326  309853 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0717 22:17:43.652331  309853 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0717 22:17:43.652337  309853 command_runner.go:130] > # plugin_dirs = [
	I0717 22:17:43.652341  309853 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0717 22:17:43.652347  309853 command_runner.go:130] > # ]
	I0717 22:17:43.652353  309853 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0717 22:17:43.652358  309853 command_runner.go:130] > [crio.metrics]
	I0717 22:17:43.652363  309853 command_runner.go:130] > # Globally enable or disable metrics support.
	I0717 22:17:43.652370  309853 command_runner.go:130] > # enable_metrics = false
	I0717 22:17:43.652374  309853 command_runner.go:130] > # Specify enabled metrics collectors.
	I0717 22:17:43.652381  309853 command_runner.go:130] > # Per default all metrics are enabled.
	I0717 22:17:43.652387  309853 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0717 22:17:43.652395  309853 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0717 22:17:43.652403  309853 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0717 22:17:43.652409  309853 command_runner.go:130] > # metrics_collectors = [
	I0717 22:17:43.652413  309853 command_runner.go:130] > # 	"operations",
	I0717 22:17:43.652420  309853 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0717 22:17:43.652427  309853 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0717 22:17:43.652432  309853 command_runner.go:130] > # 	"operations_errors",
	I0717 22:17:43.652439  309853 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0717 22:17:43.652443  309853 command_runner.go:130] > # 	"image_pulls_by_name",
	I0717 22:17:43.652449  309853 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0717 22:17:43.652454  309853 command_runner.go:130] > # 	"image_pulls_failures",
	I0717 22:17:43.652458  309853 command_runner.go:130] > # 	"image_pulls_successes",
	I0717 22:17:43.652463  309853 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0717 22:17:43.652469  309853 command_runner.go:130] > # 	"image_layer_reuse",
	I0717 22:17:43.652474  309853 command_runner.go:130] > # 	"containers_oom_total",
	I0717 22:17:43.652480  309853 command_runner.go:130] > # 	"containers_oom",
	I0717 22:17:43.652484  309853 command_runner.go:130] > # 	"processes_defunct",
	I0717 22:17:43.652491  309853 command_runner.go:130] > # 	"operations_total",
	I0717 22:17:43.652495  309853 command_runner.go:130] > # 	"operations_latency_seconds",
	I0717 22:17:43.652502  309853 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0717 22:17:43.652507  309853 command_runner.go:130] > # 	"operations_errors_total",
	I0717 22:17:43.652513  309853 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0717 22:17:43.652517  309853 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0717 22:17:43.652524  309853 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0717 22:17:43.652529  309853 command_runner.go:130] > # 	"image_pulls_success_total",
	I0717 22:17:43.652535  309853 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0717 22:17:43.652540  309853 command_runner.go:130] > # 	"containers_oom_count_total",
	I0717 22:17:43.652546  309853 command_runner.go:130] > # ]
	I0717 22:17:43.652551  309853 command_runner.go:130] > # The port on which the metrics server will listen.
	I0717 22:17:43.652557  309853 command_runner.go:130] > # metrics_port = 9090
	I0717 22:17:43.652563  309853 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0717 22:17:43.652569  309853 command_runner.go:130] > # metrics_socket = ""
	I0717 22:17:43.652574  309853 command_runner.go:130] > # The certificate for the secure metrics server.
	I0717 22:17:43.652582  309853 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0717 22:17:43.652590  309853 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0717 22:17:43.652597  309853 command_runner.go:130] > # certificate on any modification event.
	I0717 22:17:43.652601  309853 command_runner.go:130] > # metrics_cert = ""
	I0717 22:17:43.652606  309853 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0717 22:17:43.652614  309853 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0717 22:17:43.652618  309853 command_runner.go:130] > # metrics_key = ""
	I0717 22:17:43.652626  309853 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0717 22:17:43.652630  309853 command_runner.go:130] > [crio.tracing]
	I0717 22:17:43.652639  309853 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0717 22:17:43.652645  309853 command_runner.go:130] > # enable_tracing = false
	I0717 22:17:43.652650  309853 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0717 22:17:43.652657  309853 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0717 22:17:43.652662  309853 command_runner.go:130] > # Number of samples to collect per million spans.
	I0717 22:17:43.652670  309853 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0717 22:17:43.652675  309853 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0717 22:17:43.652681  309853 command_runner.go:130] > [crio.stats]
	I0717 22:17:43.652687  309853 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0717 22:17:43.652695  309853 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0717 22:17:43.652701  309853 command_runner.go:130] > # stats_collection_period = 0
	I0717 22:17:43.652736  309853 command_runner.go:130] ! time="2023-07-17 22:17:43.645419320Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0717 22:17:43.652750  309853 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0717 22:17:43.652806  309853 cni.go:84] Creating CNI manager for ""
	I0717 22:17:43.652814  309853 cni.go:137] 2 nodes found, recommending kindnet
	I0717 22:17:43.652824  309853 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 22:17:43.652846  309853 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-265316 NodeName:multinode-265316-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 22:17:43.652958  309853 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-265316-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 22:17:43.653004  309853 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-265316-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:multinode-265316 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
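The kubeadm config dumped above is the fully rendered result; minikube produces it by filling a Go text/template with per-node values (the render is what kubeadm.go:181 logs). A minimal sketch of that approach follows; the struct and field names here are hypothetical, not minikube's own:

    package main

    import (
    	"os"
    	"text/template"
    )

    // kubeadmParams is an illustrative subset of the per-node values
    // substituted into the template; names are hypothetical.
    type kubeadmParams struct {
    	AdvertiseAddress string
    	BindPort         int
    	NodeName         string
    	PodSubnet        string
    }

    const initConfig = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      name: "{{.NodeName}}"
    ---
    networking:
      podSubnet: "{{.PodSubnet}}"
    `

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(initConfig))
    	// Values taken from the log for the joining worker node.
    	if err := t.Execute(os.Stdout, kubeadmParams{
    		AdvertiseAddress: "192.168.58.3",
    		BindPort:         8443,
    		NodeName:         "multinode-265316-m02",
    		PodSubnet:        "10.244.0.0/16",
    	}); err != nil {
    		panic(err)
    	}
    }
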
	I0717 22:17:43.653051  309853 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 22:17:43.661277  309853 command_runner.go:130] > kubeadm
	I0717 22:17:43.661297  309853 command_runner.go:130] > kubectl
	I0717 22:17:43.661301  309853 command_runner.go:130] > kubelet
	I0717 22:17:43.661322  309853 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 22:17:43.661386  309853 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0717 22:17:43.669428  309853 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0717 22:17:43.685408  309853 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 22:17:43.701439  309853 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0717 22:17:43.704731  309853 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 22:17:43.714478  309853 host.go:66] Checking if "multinode-265316" exists ...
	I0717 22:17:43.714695  309853 start.go:301] JoinCluster: &{Name:multinode-265316 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-265316 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:17:43.714786  309853 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0717 22:17:43.714825  309853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-265316
	I0717 22:17:43.714852  309853 config.go:182] Loaded profile config "multinode-265316": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:17:43.731214  309853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/multinode-265316/id_rsa Username:docker}
	I0717 22:17:43.873813  309853 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token bws4dd.syu7mscmh33uzuzs --discovery-token-ca-cert-hash sha256:bfc53725e6665ea0346f55c73390f7faa9cc8aa313e76f38236964b5079a2a27 
	I0717 22:17:43.873881  309853 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0717 22:17:43.873924  309853 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bws4dd.syu7mscmh33uzuzs --discovery-token-ca-cert-hash sha256:bfc53725e6665ea0346f55c73390f7faa9cc8aa313e76f38236964b5079a2a27 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-265316-m02"
	I0717 22:17:43.907986  309853 command_runner.go:130] > [preflight] Running pre-flight checks
	I0717 22:17:43.935218  309853 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0717 22:17:43.935240  309853 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1037-gcp
	I0717 22:17:43.935247  309853 command_runner.go:130] > OS: Linux
	I0717 22:17:43.935255  309853 command_runner.go:130] > CGROUPS_CPU: enabled
	I0717 22:17:43.935264  309853 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0717 22:17:43.935271  309853 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0717 22:17:43.935278  309853 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0717 22:17:43.935289  309853 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0717 22:17:43.935297  309853 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0717 22:17:43.935309  309853 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0717 22:17:43.935317  309853 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0717 22:17:43.935322  309853 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0717 22:17:44.011045  309853 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0717 22:17:44.011071  309853 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0717 22:17:44.035375  309853 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 22:17:44.035423  309853 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 22:17:44.035439  309853 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0717 22:17:44.095559  309853 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0717 22:17:46.108295  309853 command_runner.go:130] > This node has joined the cluster:
	I0717 22:17:46.108328  309853 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0717 22:17:46.108337  309853 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0717 22:17:46.108347  309853 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0717 22:17:46.111469  309853 command_runner.go:130] ! W0717 22:17:43.907532    1109 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0717 22:17:46.111507  309853 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1037-gcp\n", err: exit status 1
	I0717 22:17:46.111524  309853 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 22:17:46.111556  309853 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bws4dd.syu7mscmh33uzuzs --discovery-token-ca-cert-hash sha256:bfc53725e6665ea0346f55c73390f7faa9cc8aa313e76f38236964b5079a2a27 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-265316-m02": (2.237610037s)
	I0717 22:17:46.111587  309853 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0717 22:17:46.269119  309853 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0717 22:17:46.269151  309853 start.go:303] JoinCluster complete in 2.55445511s
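The join that just completed is a two-step flow: ask the control plane for a join command (`kubeadm token create --print-join-command --ttl=0`), then run that command on the worker with the extra flags the log shows. A minimal sketch of the same sequence, assuming both commands run locally; exec.Command stands in for minikube's ssh_runner, which actually runs each step on a different host over SSH:

    package main

    import (
    	"log"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Step 1 (control plane): print a non-expiring join command.
    	out, err := exec.Command("sudo", "kubeadm", "token", "create",
    		"--print-join-command", "--ttl=0").Output()
    	if err != nil {
    		log.Fatalf("token create: %v", err)
    	}
    	// Step 2 (worker): run the printed command, with the same extra
    	// flags the log shows for the joining node.
    	joinArgs := strings.Fields(strings.TrimSpace(string(out)))
    	joinArgs = append(joinArgs, "--ignore-preflight-errors=all",
    		"--node-name=multinode-265316-m02")
    	if outB, err := exec.Command("sudo", joinArgs...).CombinedOutput(); err != nil {
    		log.Fatalf("kubeadm join: %v\n%s", err, outB)
    	}
    }
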
	I0717 22:17:46.269162  309853 cni.go:84] Creating CNI manager for ""
	I0717 22:17:46.269167  309853 cni.go:137] 2 nodes found, recommending kindnet
	I0717 22:17:46.269216  309853 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 22:17:46.272647  309853 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0717 22:17:46.272676  309853 command_runner.go:130] >   Size: 3955775   	Blocks: 7728       IO Block: 4096   regular file
	I0717 22:17:46.272687  309853 command_runner.go:130] > Device: 37h/55d	Inode: 2850400     Links: 1
	I0717 22:17:46.272698  309853 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 22:17:46.272714  309853 command_runner.go:130] > Access: 2023-05-09 19:53:47.000000000 +0000
	I0717 22:17:46.272726  309853 command_runner.go:130] > Modify: 2023-05-09 19:53:47.000000000 +0000
	I0717 22:17:46.272735  309853 command_runner.go:130] > Change: 2023-07-17 21:58:26.314622681 +0000
	I0717 22:17:46.272742  309853 command_runner.go:130] >  Birth: 2023-07-17 21:58:26.290621026 +0000
	I0717 22:17:46.272790  309853 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0717 22:17:46.272800  309853 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 22:17:46.289632  309853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0717 22:17:46.557962  309853 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0717 22:17:46.557984  309853 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0717 22:17:46.557990  309853 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0717 22:17:46.557995  309853 command_runner.go:130] > daemonset.apps/kindnet configured
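A sketch of the sequence the log just showed: stat the portmap CNI plugin binary (cni.go's probe), then apply the kindnet manifest with the cluster's own kubectl. Paths are taken from the log; the control flow here is illustrative, not minikube's exact code:

    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	// Probe for the standard CNI portmap plugin, as the stat above does.
    	fi, err := os.Stat("/opt/cni/bin/portmap")
    	if err != nil || !fi.Mode().IsRegular() {
    		log.Fatalf("portmap plugin missing: %v", err)
    	}
    	// Apply the CNI manifest with the kubeconfig inside the node.
    	out, err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.27.3/kubectl",
    		"apply", "--kubeconfig=/var/lib/minikube/kubeconfig",
    		"-f", "/var/tmp/minikube/cni.yaml").CombinedOutput()
    	if err != nil {
    		log.Fatalf("apply CNI manifest: %v\n%s", err, out)
    	}
    	log.Printf("kindnet applied:\n%s", out)
    }
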
	I0717 22:17:46.558335  309853 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16899-218877/kubeconfig
	I0717 22:17:46.558545  309853 kapi.go:59] client config for multinode-265316: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16899-218877/.minikube/profiles/multinode-265316/client.crt", KeyFile:"/home/jenkins/minikube-integration/16899-218877/.minikube/profiles/multinode-265316/client.key", CAFile:"/home/jenkins/minikube-integration/16899-218877/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 22:17:46.558836  309853 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0717 22:17:46.558847  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:46.558855  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:46.558861  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:46.560844  309853 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 22:17:46.560865  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:46.560873  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:46.560879  309853 round_trippers.go:580]     Content-Length: 291
	I0717 22:17:46.560885  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:46 GMT
	I0717 22:17:46.560890  309853 round_trippers.go:580]     Audit-Id: 9df31dcd-2e2e-4d31-8750-c413c31d791c
	I0717 22:17:46.560900  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:46.560905  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:46.560911  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:46.560935  309853 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"8cd30164-7039-470f-9b7c-62d4569467c0","resourceVersion":"439","creationTimestamp":"2023-07-17T22:17:14Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0717 22:17:46.561024  309853 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-265316" context rescaled to 1 replicas
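The GET on .../deployments/coredns/scale above, followed by the rescale to 1 replica, goes through the deployment's scale subresource. A client-go sketch of the same call pair; the kubeconfig path and error handling are illustrative:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx := context.Background()
    	// Read the current scale of the coredns deployment.
    	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	// Rescale to a single replica if it is not there already.
    	if scale.Spec.Replicas != 1 {
    		scale.Spec.Replicas = 1
    		if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
    			panic(err)
    		}
    	}
    	fmt.Println("coredns rescaled to 1 replica")
    }
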
	I0717 22:17:46.561052  309853 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0717 22:17:46.564753  309853 out.go:177] * Verifying Kubernetes components...
	I0717 22:17:46.566317  309853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:17:46.582057  309853 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16899-218877/kubeconfig
	I0717 22:17:46.582306  309853 kapi.go:59] client config for multinode-265316: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16899-218877/.minikube/profiles/multinode-265316/client.crt", KeyFile:"/home/jenkins/minikube-integration/16899-218877/.minikube/profiles/multinode-265316/client.key", CAFile:"/home/jenkins/minikube-integration/16899-218877/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 22:17:46.582554  309853 node_ready.go:35] waiting up to 6m0s for node "multinode-265316-m02" to be "Ready" ...
	I0717 22:17:46.582614  309853 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265316-m02
	I0717 22:17:46.582621  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:46.582629  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:46.582636  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:46.584598  309853 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 22:17:46.584618  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:46.584628  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:46.584637  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:46.584645  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:46 GMT
	I0717 22:17:46.584654  309853 round_trippers.go:580]     Audit-Id: 97afafc7-d967-49bd-83b1-cb2e28992beb
	I0717 22:17:46.584663  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:46.584676  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:46.584838  309853 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265316-m02","uid":"e4026444-3460-4c63-b3b2-4fa5ecc2c413","resourceVersion":"473","creationTimestamp":"2023-07-17T22:17:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-265316-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:17:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:17:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volu [truncated 5101 chars]
	I0717 22:17:47.085885  309853 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265316-m02
	I0717 22:17:47.085905  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:47.085917  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:47.085923  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:47.088327  309853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:17:47.088347  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:47.088354  309853 round_trippers.go:580]     Audit-Id: b13ca2c5-2a2b-4fb2-8094-6bf6f106f57e
	I0717 22:17:47.088363  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:47.088371  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:47.088379  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:47.088387  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:47.088396  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:47 GMT
	I0717 22:17:47.088512  309853 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265316-m02","uid":"e4026444-3460-4c63-b3b2-4fa5ecc2c413","resourceVersion":"473","creationTimestamp":"2023-07-17T22:17:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-265316-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:17:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:17:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volu [truncated 5101 chars]
	I0717 22:17:47.585745  309853 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265316-m02
	I0717 22:17:47.585765  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:47.585777  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:47.585785  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:47.588245  309853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:17:47.588271  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:47.588282  309853 round_trippers.go:580]     Audit-Id: e9b18d57-1b30-419f-a2cd-dc52fac6f33b
	I0717 22:17:47.588292  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:47.588300  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:47.588308  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:47.588317  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:47.588328  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:47 GMT
	I0717 22:17:47.588448  309853 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265316-m02","uid":"e4026444-3460-4c63-b3b2-4fa5ecc2c413","resourceVersion":"473","creationTimestamp":"2023-07-17T22:17:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-265316-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:17:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:17:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volu [truncated 5101 chars]
	I0717 22:17:48.086130  309853 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265316-m02
	I0717 22:17:48.086159  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:48.086168  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:48.086174  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:48.088490  309853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:17:48.088516  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:48.088528  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:48 GMT
	I0717 22:17:48.088537  309853 round_trippers.go:580]     Audit-Id: e226c04a-a35e-419c-ab81-d3da21e7d476
	I0717 22:17:48.088547  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:48.088558  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:48.088571  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:48.088584  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:48.088713  309853 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265316-m02","uid":"e4026444-3460-4c63-b3b2-4fa5ecc2c413","resourceVersion":"490","creationTimestamp":"2023-07-17T22:17:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-265316-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:17:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:17:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volu [truncated 5176 chars]
	I0717 22:17:48.089105  309853 node_ready.go:49] node "multinode-265316-m02" has status "Ready":"True"
	I0717 22:17:48.089124  309853 node_ready.go:38] duration metric: took 1.506554959s waiting for node "multinode-265316-m02" to be "Ready" ...
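The readiness wait that just finished is a simple poll: GET /api/v1/nodes/<name> until the node's Ready condition reports True. A client-go sketch of the same loop; the 500ms interval matches the spacing of the requests above and the 6m timeout is the one the log states, but the helper itself is illustrative:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := cs.CoreV1().Nodes().Get(ctx, "multinode-265316-m02", metav1.GetOptions{})
    			if err != nil {
    				return false, nil // treat transient API errors as "not ready yet"
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(`node "multinode-265316-m02" is Ready`)
    }
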
	I0717 22:17:48.089134  309853 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:17:48.089208  309853 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0717 22:17:48.089218  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:48.089229  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:48.089239  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:48.092398  309853 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 22:17:48.092430  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:48.092442  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:48.092451  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:48.092459  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:48.092471  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:48 GMT
	I0717 22:17:48.092481  309853 round_trippers.go:580]     Audit-Id: d3e5029e-dcb0-4392-85d6-f30f6770a03f
	I0717 22:17:48.092494  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:48.093019  309853 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"490"},"items":[{"metadata":{"name":"coredns-5d78c9869d-s4bbn","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"f5bd6a07-4ec0-46cb-8b1e-ef5178a23919","resourceVersion":"435","creationTimestamp":"2023-07-17T22:17:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"24ad0d8f-b80a-4d3d-9682-ee0317a403b9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:17:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24ad0d8f-b80a-4d3d-9682-ee0317a403b9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68974 chars]
	I0717 22:17:48.096083  309853 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-s4bbn" in "kube-system" namespace to be "Ready" ...
	I0717 22:17:48.096173  309853 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-s4bbn
	I0717 22:17:48.096184  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:48.096194  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:48.096205  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:48.098040  309853 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 22:17:48.098059  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:48.098068  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:48.098077  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:48 GMT
	I0717 22:17:48.098085  309853 round_trippers.go:580]     Audit-Id: 3b65152f-58a5-45f5-b78f-6edd17b17343
	I0717 22:17:48.098095  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:48.098111  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:48.098121  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:48.098212  309853 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-s4bbn","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"f5bd6a07-4ec0-46cb-8b1e-ef5178a23919","resourceVersion":"435","creationTimestamp":"2023-07-17T22:17:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"24ad0d8f-b80a-4d3d-9682-ee0317a403b9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:17:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24ad0d8f-b80a-4d3d-9682-ee0317a403b9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0717 22:17:48.098614  309853 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265316
	I0717 22:17:48.098630  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:48.098642  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:48.098652  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:48.100741  309853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:17:48.100764  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:48.100775  309853 round_trippers.go:580]     Audit-Id: 7fa90669-d9cd-4ed3-94e7-2c3d5ee55447
	I0717 22:17:48.100785  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:48.100797  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:48.100809  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:48.100819  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:48.100832  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:48 GMT
	I0717 22:17:48.100974  309853 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265316","uid":"50c1c1e7-3d65-45f6-a51d-6b8108e33a81","resourceVersion":"414","creationTimestamp":"2023-07-17T22:17:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-265316","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-265316","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_17_15_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:17:12Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0717 22:17:48.101255  309853 pod_ready.go:92] pod "coredns-5d78c9869d-s4bbn" in "kube-system" namespace has status "Ready":"True"
	I0717 22:17:48.101268  309853 pod_ready.go:81] duration metric: took 5.161981ms waiting for pod "coredns-5d78c9869d-s4bbn" in "kube-system" namespace to be "Ready" ...
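The per-pod check behind each of these waits is the pod-level analogue of the node poll sketched earlier: a pod counts as "Ready" once its PodReady condition reports True. A minimal helper showing that condition scan:

    package podcheck

    import corev1 "k8s.io/api/core/v1"

    // podReady reports whether the pod's PodReady condition is True,
    // the check applied to each system-critical pod above.
    func podReady(p *corev1.Pod) bool {
    	for _, c := range p.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }
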
	I0717 22:17:48.101276  309853 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-265316" in "kube-system" namespace to be "Ready" ...
	I0717 22:17:48.101319  309853 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-265316
	I0717 22:17:48.101326  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:48.101343  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:48.101351  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:48.103088  309853 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 22:17:48.103108  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:48.103117  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:48.103126  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:48.103134  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:48 GMT
	I0717 22:17:48.103148  309853 round_trippers.go:580]     Audit-Id: d3621200-d8f6-4df9-b23f-37bc6e443a68
	I0717 22:17:48.103157  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:48.103169  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:48.103240  309853 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-265316","namespace":"kube-system","uid":"7ce134e3-d832-431a-acea-e9c06ceab0df","resourceVersion":"297","creationTimestamp":"2023-07-17T22:17:15Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"3a8059633c8e010933f854a50f12fcb2","kubernetes.io/config.mirror":"3a8059633c8e010933f854a50f12fcb2","kubernetes.io/config.seen":"2023-07-17T22:17:14.996579724Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-265316","uid":"50c1c1e7-3d65-45f6-a51d-6b8108e33a81","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:17:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0717 22:17:48.103582  309853 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265316
	I0717 22:17:48.103593  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:48.103600  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:48.103606  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:48.105228  309853 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 22:17:48.105243  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:48.105249  309853 round_trippers.go:580]     Audit-Id: 4287b0e7-923f-4585-a929-15e1f21363d1
	I0717 22:17:48.105255  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:48.105260  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:48.105268  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:48.105274  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:48.105281  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:48 GMT
	I0717 22:17:48.105383  309853 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265316","uid":"50c1c1e7-3d65-45f6-a51d-6b8108e33a81","resourceVersion":"414","creationTimestamp":"2023-07-17T22:17:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-265316","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-265316","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_17_15_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:17:12Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0717 22:17:48.105661  309853 pod_ready.go:92] pod "etcd-multinode-265316" in "kube-system" namespace has status "Ready":"True"
	I0717 22:17:48.105676  309853 pod_ready.go:81] duration metric: took 4.393407ms waiting for pod "etcd-multinode-265316" in "kube-system" namespace to be "Ready" ...
	I0717 22:17:48.105695  309853 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-265316" in "kube-system" namespace to be "Ready" ...
	I0717 22:17:48.105749  309853 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-265316
	I0717 22:17:48.105757  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:48.105769  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:48.105783  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:48.107491  309853 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 22:17:48.107506  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:48.107513  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:48.107518  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:48.107523  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:48.107528  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:48 GMT
	I0717 22:17:48.107534  309853 round_trippers.go:580]     Audit-Id: a8b138fd-00e4-4482-8e95-a159fa89b905
	I0717 22:17:48.107539  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:48.107651  309853 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-265316","namespace":"kube-system","uid":"edf5311d-73b9-42b7-8847-62fa9c8eea08","resourceVersion":"290","creationTimestamp":"2023-07-17T22:17:15Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"402b82e1fb62f32056e02b21a5a14992","kubernetes.io/config.mirror":"402b82e1fb62f32056e02b21a5a14992","kubernetes.io/config.seen":"2023-07-17T22:17:14.996585862Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-265316","uid":"50c1c1e7-3d65-45f6-a51d-6b8108e33a81","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:17:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0717 22:17:48.108031  309853 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265316
	I0717 22:17:48.108044  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:48.108050  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:48.108056  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:48.109508  309853 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 22:17:48.109528  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:48.109538  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:48.109548  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:48.109558  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:48 GMT
	I0717 22:17:48.109570  309853 round_trippers.go:580]     Audit-Id: 7cc9f544-3358-4a2b-b225-e97cf377b67e
	I0717 22:17:48.109582  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:48.109595  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:48.109689  309853 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265316","uid":"50c1c1e7-3d65-45f6-a51d-6b8108e33a81","resourceVersion":"414","creationTimestamp":"2023-07-17T22:17:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-265316","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-265316","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_17_15_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:17:12Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0717 22:17:48.109983  309853 pod_ready.go:92] pod "kube-apiserver-multinode-265316" in "kube-system" namespace has status "Ready":"True"
	I0717 22:17:48.109996  309853 pod_ready.go:81] duration metric: took 4.288547ms waiting for pod "kube-apiserver-multinode-265316" in "kube-system" namespace to be "Ready" ...
	I0717 22:17:48.110005  309853 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-265316" in "kube-system" namespace to be "Ready" ...
	I0717 22:17:48.110049  309853 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-265316
	I0717 22:17:48.110055  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:48.110062  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:48.110071  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:48.111664  309853 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 22:17:48.111678  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:48.111684  309853 round_trippers.go:580]     Audit-Id: 3be91d03-3b11-4883-be9e-35a2d8c114f2
	I0717 22:17:48.111690  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:48.111695  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:48.111701  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:48.111709  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:48.111715  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:48 GMT
	I0717 22:17:48.111893  309853 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-265316","namespace":"kube-system","uid":"9f8c70c0-fa65-45e8-8531-94ce623ede94","resourceVersion":"294","creationTimestamp":"2023-07-17T22:17:15Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a41106563a620a43d8654372c53a786e","kubernetes.io/config.mirror":"a41106563a620a43d8654372c53a786e","kubernetes.io/config.seen":"2023-07-17T22:17:14.996587459Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-265316","uid":"50c1c1e7-3d65-45f6-a51d-6b8108e33a81","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:17:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0717 22:17:48.112260  309853 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265316
	I0717 22:17:48.112271  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:48.112286  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:48.112295  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:48.114011  309853 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 22:17:48.114025  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:48.114032  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:48 GMT
	I0717 22:17:48.114038  309853 round_trippers.go:580]     Audit-Id: 53c45755-2775-4469-8655-cf7ceec8eb4e
	I0717 22:17:48.114043  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:48.114048  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:48.114056  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:48.114061  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:48.114163  309853 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265316","uid":"50c1c1e7-3d65-45f6-a51d-6b8108e33a81","resourceVersion":"414","creationTimestamp":"2023-07-17T22:17:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-265316","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-265316","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_17_15_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:17:12Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0717 22:17:48.114433  309853 pod_ready.go:92] pod "kube-controller-manager-multinode-265316" in "kube-system" namespace has status "Ready":"True"
	I0717 22:17:48.114447  309853 pod_ready.go:81] duration metric: took 4.433398ms waiting for pod "kube-controller-manager-multinode-265316" in "kube-system" namespace to be "Ready" ...
	I0717 22:17:48.114454  309853 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4cxgd" in "kube-system" namespace to be "Ready" ...
	I0717 22:17:48.286855  309853 request.go:628] Waited for 172.338724ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4cxgd
	I0717 22:17:48.286947  309853 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4cxgd
	I0717 22:17:48.286958  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:48.286971  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:48.286985  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:48.289081  309853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:17:48.289105  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:48.289115  309853 round_trippers.go:580]     Audit-Id: a00a644c-c234-4e22-a648-7e8cd33d2422
	I0717 22:17:48.289122  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:48.289131  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:48.289139  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:48.289148  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:48.289160  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:48 GMT
	I0717 22:17:48.289294  309853 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4cxgd","generateName":"kube-proxy-","namespace":"kube-system","uid":"1297fe2e-d86e-4494-a6ac-e8b95b9ef84a","resourceVersion":"408","creationTimestamp":"2023-07-17T22:17:29Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ef51c4db-c46d-498c-ae8b-747d67715984","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:17:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ef51c4db-c46d-498c-ae8b-747d67715984\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0717 22:17:48.487155  309853 request.go:628] Waited for 197.374698ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-265316
	I0717 22:17:48.487212  309853 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265316
	I0717 22:17:48.487216  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:48.487224  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:48.487230  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:48.489629  309853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:17:48.489651  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:48.489659  309853 round_trippers.go:580]     Audit-Id: c8a52a86-3d54-47b6-8531-e7b52dc8a198
	I0717 22:17:48.489664  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:48.489669  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:48.489674  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:48.489680  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:48.489685  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:48 GMT
	I0717 22:17:48.489901  309853 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265316","uid":"50c1c1e7-3d65-45f6-a51d-6b8108e33a81","resourceVersion":"414","creationTimestamp":"2023-07-17T22:17:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-265316","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-265316","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_17_15_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:17:12Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0717 22:17:48.490280  309853 pod_ready.go:92] pod "kube-proxy-4cxgd" in "kube-system" namespace has status "Ready":"True"
	I0717 22:17:48.490296  309853 pod_ready.go:81] duration metric: took 375.835808ms waiting for pod "kube-proxy-4cxgd" in "kube-system" namespace to be "Ready" ...
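The "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's default client-side rate limiter (QPS 5, burst 10), which spaces out bursts of GETs like these. A sketch of raising those limits on the rest.Config before building the clientset; the values 50/100 are illustrative:

    package main

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	// Defaults are QPS=5, Burst=10; raising them avoids the
    	// request.go "client-side throttling" waits seen in the log.
    	cfg.QPS = 50
    	cfg.Burst = 100
    	if _, err := kubernetes.NewForConfig(cfg); err != nil {
    		panic(err)
    	}
    }
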
	I0717 22:17:48.490313  309853 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mgwvn" in "kube-system" namespace to be "Ready" ...
	I0717 22:17:48.686799  309853 request.go:628] Waited for 196.392003ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mgwvn
	I0717 22:17:48.686874  309853 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mgwvn
	I0717 22:17:48.686884  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:48.686905  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:48.686920  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:48.689494  309853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:17:48.689523  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:48.689535  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:48 GMT
	I0717 22:17:48.689543  309853 round_trippers.go:580]     Audit-Id: 9aa962ec-1d4b-4c04-97bc-cac63c52a417
	I0717 22:17:48.689551  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:48.689560  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:48.689569  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:48.689578  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:48.689681  309853 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mgwvn","generateName":"kube-proxy-","namespace":"kube-system","uid":"70288051-c1b2-4e8d-919d-bfaeacd9f09e","resourceVersion":"486","creationTimestamp":"2023-07-17T22:17:45Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"ef51c4db-c46d-498c-ae8b-747d67715984","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:17:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ef51c4db-c46d-498c-ae8b-747d67715984\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0717 22:17:48.886968  309853 request.go:628] Waited for 196.828187ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-265316-m02
	I0717 22:17:48.887033  309853 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265316-m02
	I0717 22:17:48.887039  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:48.887057  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:48.887071  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:48.889527  309853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:17:48.889557  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:48.889568  309853 round_trippers.go:580]     Audit-Id: f4ae40c7-677d-4a84-b0a6-02f95811d32b
	I0717 22:17:48.889577  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:48.889585  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:48.889593  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:48.889601  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:48.889609  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:48 GMT
	I0717 22:17:48.889744  309853 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265316-m02","uid":"e4026444-3460-4c63-b3b2-4fa5ecc2c413","resourceVersion":"490","creationTimestamp":"2023-07-17T22:17:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-265316-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:17:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:17:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volu [truncated 5176 chars]
	I0717 22:17:48.890088  309853 pod_ready.go:92] pod "kube-proxy-mgwvn" in "kube-system" namespace has status "Ready":"True"
	I0717 22:17:48.890104  309853 pod_ready.go:81] duration metric: took 399.780657ms waiting for pod "kube-proxy-mgwvn" in "kube-system" namespace to be "Ready" ...
	I0717 22:17:48.890115  309853 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-265316" in "kube-system" namespace to be "Ready" ...
	I0717 22:17:49.086560  309853 request.go:628] Waited for 196.344031ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-265316
	I0717 22:17:49.086633  309853 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-265316
	I0717 22:17:49.086641  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:49.086656  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:49.086671  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:49.089188  309853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:17:49.089215  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:49.089225  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:49.089232  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:49 GMT
	I0717 22:17:49.089238  309853 round_trippers.go:580]     Audit-Id: 94d5cd16-ca56-4798-a663-46e2644d42da
	I0717 22:17:49.089243  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:49.089251  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:49.089263  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:49.089364  309853 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-265316","namespace":"kube-system","uid":"b0e68345-3086-4b83-ab1c-d654d72eba7e","resourceVersion":"293","creationTimestamp":"2023-07-17T22:17:15Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c9df8caab23fcf6726bdc907a7b4503e","kubernetes.io/config.mirror":"c9df8caab23fcf6726bdc907a7b4503e","kubernetes.io/config.seen":"2023-07-17T22:17:14.996589459Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-265316","uid":"50c1c1e7-3d65-45f6-a51d-6b8108e33a81","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:17:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0717 22:17:49.287198  309853 request.go:628] Waited for 197.408593ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-265316
	I0717 22:17:49.287281  309853 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-265316
	I0717 22:17:49.287289  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:49.287309  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:49.287324  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:49.289686  309853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:17:49.289718  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:49.289730  309853 round_trippers.go:580]     Audit-Id: 37f501df-23a5-4417-ba65-812644a728b7
	I0717 22:17:49.289738  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:49.289748  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:49.289756  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:49.289765  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:49.289774  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:49 GMT
	I0717 22:17:49.289903  309853 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-265316","uid":"50c1c1e7-3d65-45f6-a51d-6b8108e33a81","resourceVersion":"414","creationTimestamp":"2023-07-17T22:17:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-265316","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-265316","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_17_15_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T22:17:12Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0717 22:17:49.290339  309853 pod_ready.go:92] pod "kube-scheduler-multinode-265316" in "kube-system" namespace has status "Ready":"True"
	I0717 22:17:49.290362  309853 pod_ready.go:81] duration metric: took 400.235743ms waiting for pod "kube-scheduler-multinode-265316" in "kube-system" namespace to be "Ready" ...
	I0717 22:17:49.290376  309853 pod_ready.go:38] duration metric: took 1.20122751s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 22:17:49.290395  309853 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 22:17:49.290458  309853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:17:49.301672  309853 system_svc.go:56] duration metric: took 11.265135ms WaitForService to wait for kubelet.
	I0717 22:17:49.301704  309853 kubeadm.go:581] duration metric: took 2.740625464s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 22:17:49.301732  309853 node_conditions.go:102] verifying NodePressure condition ...
	I0717 22:17:49.487216  309853 request.go:628] Waited for 185.393012ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0717 22:17:49.487279  309853 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0717 22:17:49.487284  309853 round_trippers.go:469] Request Headers:
	I0717 22:17:49.487292  309853 round_trippers.go:473]     Accept: application/json, */*
	I0717 22:17:49.487299  309853 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 22:17:49.489979  309853 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 22:17:49.490011  309853 round_trippers.go:577] Response Headers:
	I0717 22:17:49.490025  309853 round_trippers.go:580]     Audit-Id: 362a0d1d-df27-4047-af2b-4ab3739b065c
	I0717 22:17:49.490034  309853 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 22:17:49.490043  309853 round_trippers.go:580]     Content-Type: application/json
	I0717 22:17:49.490054  309853 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 045ba5ca-4555-4e04-88ae-de81d0857e59
	I0717 22:17:49.490067  309853 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6fdf35a7-d399-4c9c-bae2-8bcfb74183ff
	I0717 22:17:49.490077  309853 round_trippers.go:580]     Date: Mon, 17 Jul 2023 22:17:49 GMT
	I0717 22:17:49.490344  309853 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"492"},"items":[{"metadata":{"name":"multinode-265316","uid":"50c1c1e7-3d65-45f6-a51d-6b8108e33a81","resourceVersion":"414","creationTimestamp":"2023-07-17T22:17:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-265316","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b76e7e219387ed29a8027b03764cb35e04d80ac8","minikube.k8s.io/name":"multinode-265316","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T22_17_15_0700","minikube.k8s.io/version":"v1.31.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12168 chars]
	I0717 22:17:49.491018  309853 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0717 22:17:49.491035  309853 node_conditions.go:123] node cpu capacity is 8
	I0717 22:17:49.491047  309853 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0717 22:17:49.491053  309853 node_conditions.go:123] node cpu capacity is 8
	I0717 22:17:49.491062  309853 node_conditions.go:105] duration metric: took 189.324672ms to run NodePressure ...
	I0717 22:17:49.491076  309853 start.go:228] waiting for startup goroutines ...
	I0717 22:17:49.491112  309853 start.go:242] writing updated cluster config ...
	I0717 22:17:49.491514  309853 ssh_runner.go:195] Run: rm -f paused
	I0717 22:17:49.539568  309853 start.go:578] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0717 22:17:49.543841  309853 out.go:177] * Done! kubectl is now configured to use "multinode-265316" cluster and "default" namespace by default
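
	The pod_ready.go waits above poll each system pod until its Ready condition reports True, and the "Waited ... due to client-side throttling" lines come from client-go's default QPS/Burst rate limiter, not from API priority-and-fairness. A minimal sketch of that polling pattern (not minikube's actual helper; the kubeconfig path is a hypothetical placeholder, and the pod name is taken from the log):

	    package main

	    import (
	    	"context"
	    	"fmt"
	    	"time"

	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/apimachinery/pkg/util/wait"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // hypothetical path
	    	if err != nil {
	    		panic(err)
	    	}
	    	// Raising QPS/Burst is what avoids the "client-side throttling" waits seen above.
	    	cfg.QPS = 50
	    	cfg.Burst = 100
	    	cs := kubernetes.NewForConfigOrDie(cfg)

	    	err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
	    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-4cxgd", metav1.GetOptions{})
	    		if err != nil {
	    			return false, nil // treat errors as transient and keep polling
	    		}
	    		for _, c := range pod.Status.Conditions {
	    			if c.Type == corev1.PodReady {
	    				return c.Status == corev1.ConditionTrue, nil
	    			}
	    		}
	    		return false, nil
	    	})
	    	fmt.Println("ready:", err == nil)
	    }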
	
	* 
	* ==> CRI-O <==
	* Jul 17 22:17:32 multinode-265316 crio[951]: time="2023-07-17 22:17:32.061801927Z" level=info msg="Created container 9bf48411ce723656ce6a5d5ba0cbd4a443780c1802ee9158fe87fc8796f36b2b: kube-system/storage-provisioner/storage-provisioner" id=fce75532-f4f7-46bf-8531-31f08e7e32dd name=/runtime.v1.RuntimeService/CreateContainer
	Jul 17 22:17:32 multinode-265316 crio[951]: time="2023-07-17 22:17:32.061829251Z" level=info msg="Starting container: 2fcee45b0d74d1e23943cad75fb1b50ffd9010153d18f3b9208e39f03cc88898" id=f89a59b9-a01a-41a2-a336-cf75f5de6bca name=/runtime.v1.RuntimeService/StartContainer
	Jul 17 22:17:32 multinode-265316 crio[951]: time="2023-07-17 22:17:32.062235169Z" level=info msg="Starting container: 9bf48411ce723656ce6a5d5ba0cbd4a443780c1802ee9158fe87fc8796f36b2b" id=0bd1d958-772d-4fa0-928a-996719ebfd08 name=/runtime.v1.RuntimeService/StartContainer
	Jul 17 22:17:32 multinode-265316 crio[951]: time="2023-07-17 22:17:32.070969209Z" level=info msg="Started container" PID=2343 containerID=2fcee45b0d74d1e23943cad75fb1b50ffd9010153d18f3b9208e39f03cc88898 description=kube-system/coredns-5d78c9869d-s4bbn/coredns id=f89a59b9-a01a-41a2-a336-cf75f5de6bca name=/runtime.v1.RuntimeService/StartContainer sandboxID=e5e02d25837adb34e94be6abf056ea339b90e79a9de9337a3bcdbe480a278d4c
	Jul 17 22:17:32 multinode-265316 crio[951]: time="2023-07-17 22:17:32.072376326Z" level=info msg="Started container" PID=2341 containerID=9bf48411ce723656ce6a5d5ba0cbd4a443780c1802ee9158fe87fc8796f36b2b description=kube-system/storage-provisioner/storage-provisioner id=0bd1d958-772d-4fa0-928a-996719ebfd08 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8a4f42b2b6c9fbd3872f978b1a7a56c276c683a00f9341cb865c2d987ffb2a84
	Jul 17 22:17:50 multinode-265316 crio[951]: time="2023-07-17 22:17:50.615460481Z" level=info msg="Running pod sandbox: default/busybox-67b7f59bb-dhkzz/POD" id=71a1af51-b697-458b-9ac9-a6b9d27e2ac2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jul 17 22:17:50 multinode-265316 crio[951]: time="2023-07-17 22:17:50.615539452Z" level=warning msg="Allowed annotations are specified for workload []"
	Jul 17 22:17:50 multinode-265316 crio[951]: time="2023-07-17 22:17:50.631441859Z" level=info msg="Got pod network &{Name:busybox-67b7f59bb-dhkzz Namespace:default ID:789902451cb43e6f21615a733bbd861f03e3695c1a3d90d100e7fc3d2d10bad7 UID:a8741c75-ca38-4ba7-8288-ae6966dfcce5 NetNS:/var/run/netns/bdc81125-29ee-4b35-9e96-d8a0729d79ed Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jul 17 22:17:50 multinode-265316 crio[951]: time="2023-07-17 22:17:50.631490110Z" level=info msg="Adding pod default_busybox-67b7f59bb-dhkzz to CNI network \"kindnet\" (type=ptp)"
	Jul 17 22:17:50 multinode-265316 crio[951]: time="2023-07-17 22:17:50.641109191Z" level=info msg="Got pod network &{Name:busybox-67b7f59bb-dhkzz Namespace:default ID:789902451cb43e6f21615a733bbd861f03e3695c1a3d90d100e7fc3d2d10bad7 UID:a8741c75-ca38-4ba7-8288-ae6966dfcce5 NetNS:/var/run/netns/bdc81125-29ee-4b35-9e96-d8a0729d79ed Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jul 17 22:17:50 multinode-265316 crio[951]: time="2023-07-17 22:17:50.641244449Z" level=info msg="Checking pod default_busybox-67b7f59bb-dhkzz for CNI network kindnet (type=ptp)"
	Jul 17 22:17:50 multinode-265316 crio[951]: time="2023-07-17 22:17:50.674572212Z" level=info msg="Ran pod sandbox 789902451cb43e6f21615a733bbd861f03e3695c1a3d90d100e7fc3d2d10bad7 with infra container: default/busybox-67b7f59bb-dhkzz/POD" id=71a1af51-b697-458b-9ac9-a6b9d27e2ac2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jul 17 22:17:50 multinode-265316 crio[951]: time="2023-07-17 22:17:50.675734021Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=a5763c18-5055-4efd-8e46-e028c7164284 name=/runtime.v1.ImageService/ImageStatus
	Jul 17 22:17:50 multinode-265316 crio[951]: time="2023-07-17 22:17:50.675996152Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=a5763c18-5055-4efd-8e46-e028c7164284 name=/runtime.v1.ImageService/ImageStatus
	Jul 17 22:17:50 multinode-265316 crio[951]: time="2023-07-17 22:17:50.676775578Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=10798015-1c9b-4b29-82b0-a6aaa5b6c754 name=/runtime.v1.ImageService/PullImage
	Jul 17 22:17:50 multinode-265316 crio[951]: time="2023-07-17 22:17:50.679869249Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Jul 17 22:17:51 multinode-265316 crio[951]: time="2023-07-17 22:17:51.356987475Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Jul 17 22:17:52 multinode-265316 crio[951]: time="2023-07-17 22:17:52.973243255Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335" id=10798015-1c9b-4b29-82b0-a6aaa5b6c754 name=/runtime.v1.ImageService/PullImage
	Jul 17 22:17:52 multinode-265316 crio[951]: time="2023-07-17 22:17:52.974802764Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=969c4428-d4c4-4ae3-b071-94ddbd7c63bb name=/runtime.v1.ImageService/ImageStatus
	Jul 17 22:17:52 multinode-265316 crio[951]: time="2023-07-17 22:17:52.975299535Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=969c4428-d4c4-4ae3-b071-94ddbd7c63bb name=/runtime.v1.ImageService/ImageStatus
	Jul 17 22:17:52 multinode-265316 crio[951]: time="2023-07-17 22:17:52.976151139Z" level=info msg="Creating container: default/busybox-67b7f59bb-dhkzz/busybox" id=7d0f19c6-6665-447d-80ac-0366ab0b2681 name=/runtime.v1.RuntimeService/CreateContainer
	Jul 17 22:17:52 multinode-265316 crio[951]: time="2023-07-17 22:17:52.976264832Z" level=warning msg="Allowed annotations are specified for workload []"
	Jul 17 22:17:53 multinode-265316 crio[951]: time="2023-07-17 22:17:53.059616560Z" level=info msg="Created container ce682d6686e57d67df2a83663ff7ea6416c7d021f2a60707e061cf4f63f4895c: default/busybox-67b7f59bb-dhkzz/busybox" id=7d0f19c6-6665-447d-80ac-0366ab0b2681 name=/runtime.v1.RuntimeService/CreateContainer
	Jul 17 22:17:53 multinode-265316 crio[951]: time="2023-07-17 22:17:53.060430524Z" level=info msg="Starting container: ce682d6686e57d67df2a83663ff7ea6416c7d021f2a60707e061cf4f63f4895c" id=8096a253-0a54-40fc-82c4-3dbf9d5ba569 name=/runtime.v1.RuntimeService/StartContainer
	Jul 17 22:17:53 multinode-265316 crio[951]: time="2023-07-17 22:17:53.070185955Z" level=info msg="Started container" PID=2511 containerID=ce682d6686e57d67df2a83663ff7ea6416c7d021f2a60707e061cf4f63f4895c description=default/busybox-67b7f59bb-dhkzz/busybox id=8096a253-0a54-40fc-82c4-3dbf9d5ba569 name=/runtime.v1.RuntimeService/StartContainer sandboxID=789902451cb43e6f21615a733bbd861f03e3695c1a3d90d100e7fc3d2d10bad7
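
	Each CRI-O entry above is a CRI RuntimeService call (RunPodSandbox, CreateContainer, StartContainer) arriving over the unix socket named in the node annotations. A rough sketch of querying that same endpoint with the CRI API client; this assumes the k8s.io/cri-api and google.golang.org/grpc modules and must run on the node itself:

	    package main

	    import (
	    	"context"
	    	"fmt"

	    	"google.golang.org/grpc"
	    	"google.golang.org/grpc/credentials/insecure"
	    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	    )

	    func main() {
	    	// Dial the same socket the kubelet uses (unix:///var/run/crio/crio.sock).
	    	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
	    		grpc.WithTransportCredentials(insecure.NewCredentials()))
	    	if err != nil {
	    		panic(err)
	    	}
	    	defer conn.Close()

	    	rt := runtimeapi.NewRuntimeServiceClient(conn)
	    	resp, err := rt.ListContainers(context.TODO(), &runtimeapi.ListContainersRequest{})
	    	if err != nil {
	    		panic(err)
	    	}
	    	for _, c := range resp.Containers {
	    		fmt.Println(c.Id, c.Metadata.Name, c.State)
	    	}
	    }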
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ce682d6686e57       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 seconds ago       Running             busybox                   0                   789902451cb43       busybox-67b7f59bb-dhkzz
	2fcee45b0d74d       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      24 seconds ago      Running             coredns                   0                   e5e02d25837ad       coredns-5d78c9869d-s4bbn
	9bf48411ce723       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      24 seconds ago      Running             storage-provisioner       0                   8a4f42b2b6c9f       storage-provisioner
	d655eb428934c       5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c                                      26 seconds ago      Running             kube-proxy                0                   17311f98f64cb       kube-proxy-4cxgd
	77fe44e625e7b       b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da                                      26 seconds ago      Running             kindnet-cni               0                   1a1829067b58d       kindnet-29cp4
	b7ccc9a540d05       41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a                                      47 seconds ago      Running             kube-scheduler            0                   22b1c53711d96       kube-scheduler-multinode-265316
	e3050441a4f6a       7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f                                      47 seconds ago      Running             kube-controller-manager   0                   f0c1a73573ec6       kube-controller-manager-multinode-265316
	d201add4ca155       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681                                      47 seconds ago      Running             etcd                      0                   9022ca9d6dba7       etcd-multinode-265316
	3cfffa8e02b88       08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a                                      47 seconds ago      Running             kube-apiserver            0                   b778cca5ccf21       kube-apiserver-multinode-265316
	
	* 
	* ==> coredns [2fcee45b0d74d1e23943cad75fb1b50ffd9010153d18f3b9208e39f03cc88898] <==
	* [INFO] 10.244.1.2:48511 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000127677s
	[INFO] 10.244.0.3:59231 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000092836s
	[INFO] 10.244.0.3:43296 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001736926s
	[INFO] 10.244.0.3:47746 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000065243s
	[INFO] 10.244.0.3:39697 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000058997s
	[INFO] 10.244.0.3:60699 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.000950772s
	[INFO] 10.244.0.3:46356 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000050616s
	[INFO] 10.244.0.3:56041 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000049513s
	[INFO] 10.244.0.3:56402 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000046719s
	[INFO] 10.244.1.2:49832 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113883s
	[INFO] 10.244.1.2:48834 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000080852s
	[INFO] 10.244.1.2:45008 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000049183s
	[INFO] 10.244.1.2:39933 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00007742s
	[INFO] 10.244.0.3:47754 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104512s
	[INFO] 10.244.0.3:34811 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000076997s
	[INFO] 10.244.0.3:45902 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000039503s
	[INFO] 10.244.0.3:54895 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062636s
	[INFO] 10.244.1.2:37511 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118545s
	[INFO] 10.244.1.2:52109 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000139027s
	[INFO] 10.244.1.2:39503 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000147523s
	[INFO] 10.244.1.2:41609 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00008331s
	[INFO] 10.244.0.3:43835 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112397s
	[INFO] 10.244.0.3:57792 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000065002s
	[INFO] 10.244.0.3:40553 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000078073s
	[INFO] 10.244.0.3:46551 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000073578s
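
	Each coredns line above is one query/answer pair: client, query type and name, rcode, and latency. Reproducing one of these lookups from inside a pod is a plain resolver call; a minimal sketch:

	    package main

	    import (
	    	"context"
	    	"fmt"
	    	"net"
	    	"time"
	    )

	    func main() {
	    	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	    	defer cancel()

	    	// Inside a pod this resolves via the cluster DNS (10.96.0.10 in this run) per /etc/resolv.conf.
	    	addrs, err := net.DefaultResolver.LookupHost(ctx, "kubernetes.default.svc.cluster.local")
	    	if err != nil {
	    		panic(err)
	    	}
	    	fmt.Println(addrs) // expect the kubernetes Service ClusterIP, 10.96.0.1 here
	    }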
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-265316
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-265316
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8
	                    minikube.k8s.io/name=multinode-265316
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_17T22_17_15_0700
	                    minikube.k8s.io/version=v1.31.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 22:17:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-265316
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 22:17:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 22:17:31 +0000   Mon, 17 Jul 2023 22:17:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 22:17:31 +0000   Mon, 17 Jul 2023 22:17:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 22:17:31 +0000   Mon, 17 Jul 2023 22:17:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 22:17:31 +0000   Mon, 17 Jul 2023 22:17:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-265316
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	System Info:
	  Machine ID:                 290e36d69423458089ca6aa0da5ba8b2
	  System UUID:                b8437113-3ed3-484c-83ef-bb970a37d5f0
	  Boot ID:                    7db0a284-d4e9-48b4-92fc-f96afb04e8db
	  Kernel Version:             5.15.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-dhkzz                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kube-system                 coredns-5d78c9869d-s4bbn                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-multinode-265316                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         41s
	  kube-system                 kindnet-29cp4                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-multinode-265316             250m (3%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-multinode-265316    200m (2%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-proxy-4cxgd                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-multinode-265316             100m (1%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 26s                kube-proxy       
	  Normal  NodeHasSufficientMemory  47s (x8 over 47s)  kubelet          Node multinode-265316 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    47s (x8 over 47s)  kubelet          Node multinode-265316 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     47s (x8 over 47s)  kubelet          Node multinode-265316 status is now: NodeHasSufficientPID
	  Normal  Starting                 42s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s                kubelet          Node multinode-265316 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s                kubelet          Node multinode-265316 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s                kubelet          Node multinode-265316 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s                node-controller  Node multinode-265316 event: Registered Node multinode-265316 in Controller
	  Normal  NodeReady                25s                kubelet          Node multinode-265316 status is now: NodeReady
	
	
	Name:               multinode-265316-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-265316-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 22:17:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-265316-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 22:17:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 22:17:47 +0000   Mon, 17 Jul 2023 22:17:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 22:17:47 +0000   Mon, 17 Jul 2023 22:17:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 22:17:47 +0000   Mon, 17 Jul 2023 22:17:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 22:17:47 +0000   Mon, 17 Jul 2023 22:17:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-265316-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	System Info:
	  Machine ID:                 f1ed792667bd4b0b98600c99e39e1bbd
	  System UUID:                cb6ebe95-6659-4e8d-a4bb-226843400290
	  Boot ID:                    7db0a284-d4e9-48b4-92fc-f96afb04e8db
	  Kernel Version:             5.15.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-chlgz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kube-system                 kindnet-w5ss9              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11s
	  kube-system                 kube-proxy-mgwvn           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 9s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  11s (x5 over 12s)  kubelet          Node multinode-265316-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11s (x5 over 12s)  kubelet          Node multinode-265316-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11s (x5 over 12s)  kubelet          Node multinode-265316-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                9s                 kubelet          Node multinode-265316-m02 status is now: NodeReady
	  Normal  RegisteredNode           8s                 node-controller  Node multinode-265316-m02 event: Registered Node multinode-265316-m02 in Controller
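
	The NodePressure verification earlier in the log (node_conditions.go) reads exactly the capacity and condition fields shown in these two node descriptions. A small sketch of that read with client-go; the kubeconfig path is a hypothetical placeholder:

	    package main

	    import (
	    	"context"
	    	"fmt"

	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // hypothetical path
	    	if err != nil {
	    		panic(err)
	    	}
	    	cs := kubernetes.NewForConfigOrDie(cfg)

	    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	    	if err != nil {
	    		panic(err)
	    	}
	    	for _, n := range nodes.Items {
	    		cpu := n.Status.Capacity[corev1.ResourceCPU]
	    		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
	    		fmt.Printf("%s cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	    		for _, c := range n.Status.Conditions {
	    			if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure || c.Type == corev1.NodePIDPressure {
	    				fmt.Printf("  %s=%s\n", c.Type, c.Status) // all False on a healthy node, as above
	    			}
	    		}
	    	}
	    }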
	
	* 
	* ==> dmesg <==
	* [  +0.007355] FS-Cache: O-key=[8] 'ffa00f0200000000'
	[  +0.004920] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006588] FS-Cache: N-cookie d=00000000d2af0321{9p.inode} n=0000000037715637
	[  +0.007353] FS-Cache: N-key=[8] 'ffa00f0200000000'
	[  +3.061117] FS-Cache: Duplicate cookie detected
	[  +0.004859] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.006901] FS-Cache: O-cookie d=00000000313c8b61{9P.session} n=00000000be6062ae
	[  +0.007702] FS-Cache: O-key=[10] '34323936353335363438'
	[  +0.006739] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.007821] FS-Cache: N-cookie d=00000000313c8b61{9P.session} n=000000006adef14a
	[  +0.008922] FS-Cache: N-key=[10] '34323936353335363438'
	[Jul17 22:09] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 2a 62 39 8f f8 33 ae 79 92 20 6c b1 08 00
	[  +1.031915] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 2a 62 39 8f f8 33 ae 79 92 20 6c b1 08 00
	[  +2.015858] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 2a 62 39 8f f8 33 ae 79 92 20 6c b1 08 00
	[  +4.255723] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 2a 62 39 8f f8 33 ae 79 92 20 6c b1 08 00
	[Jul17 22:10] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 2a 62 39 8f f8 33 ae 79 92 20 6c b1 08 00
	[ +16.130862] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 2a 62 39 8f f8 33 ae 79 92 20 6c b1 08 00
	[ +32.505735] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000028] ll header: 00000000: 2a 62 39 8f f8 33 ae 79 92 20 6c b1 08 00
	
	* 
	* ==> etcd [d201add4ca1553adfe6aea40174e21b2b055df9f7844adb5d05020c6d0cca8d3] <==
	* {"level":"info","ts":"2023-07-17T22:17:09.788Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-07-17T22:17:09.788Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-07-17T22:17:09.789Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-07-17T22:17:09.789Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-07-17T22:17:09.789Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-07-17T22:17:09.789Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-07-17T22:17:09.789Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-07-17T22:17:10.180Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-07-17T22:17:10.180Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-07-17T22:17:10.180Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-07-17T22:17:10.180Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-07-17T22:17:10.180Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-07-17T22:17:10.180Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-07-17T22:17:10.180Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-07-17T22:17:10.181Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T22:17:10.182Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T22:17:10.182Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-265316 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-17T22:17:10.182Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T22:17:10.182Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T22:17:10.182Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-17T22:17:10.182Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T22:17:10.182Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-17T22:17:10.182Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T22:17:10.183Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-07-17T22:17:10.183Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  22:17:56 up  2:00,  0 users,  load average: 1.04, 1.13, 1.00
	Linux multinode-265316 5.15.0-1037-gcp #45~20.04.1-Ubuntu SMP Thu Jun 22 08:31:09 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [77fe44e625e7b4c7815616aa8061772903031a8342684784543f6e567df9daad] <==
	* I0717 22:17:29.964889       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0717 22:17:29.965128       1 main.go:107] hostIP = 192.168.58.2
	podIP = 192.168.58.2
	I0717 22:17:29.965293       1 main.go:116] setting mtu 1500 for CNI 
	I0717 22:17:29.965348       1 main.go:146] kindnetd IP family: "ipv4"
	I0717 22:17:29.965400       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0717 22:17:30.166752       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0717 22:17:30.167045       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0717 22:17:31.175028       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0717 22:17:31.175062       1 main.go:227] handling current node
	I0717 22:17:41.186859       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0717 22:17:41.186882       1 main.go:227] handling current node
	I0717 22:17:51.199146       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0717 22:17:51.199177       1 main.go:227] handling current node
	I0717 22:17:51.199189       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0717 22:17:51.199195       1 main.go:250] Node multinode-265316-m02 has CIDR [10.244.1.0/24] 
	I0717 22:17:51.199371       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
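
	That last kindnet line installs a route so traffic for the other node's PodCIDR (10.244.1.0/24) is forwarded via that node's IP. kindnet does this through netlink; a hedged sketch of the equivalent call using the github.com/vishvananda/netlink package, with the values hard-coded from the log (requires root on the node):

	    package main

	    import (
	    	"net"

	    	"github.com/vishvananda/netlink"
	    )

	    func main() {
	    	_, dst, err := net.ParseCIDR("10.244.1.0/24") // remote node's PodCIDR
	    	if err != nil {
	    		panic(err)
	    	}
	    	route := &netlink.Route{
	    		Dst: dst,
	    		Gw:  net.ParseIP("192.168.58.3"), // remote node's InternalIP
	    	}
	    	// Equivalent of the "Adding route" message in the kindnet log above.
	    	if err := netlink.RouteAdd(route); err != nil {
	    		panic(err)
	    	}
	    }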
	
	* 
	* ==> kube-apiserver [3cfffa8e02b88add1925b5ac270d48daef7bfd03d826f21c24adb900f3b16d31] <==
	* I0717 22:17:12.160552       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0717 22:17:12.160957       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 22:17:12.161072       1 shared_informer.go:318] Caches are synced for configmaps
	I0717 22:17:12.160983       1 aggregator.go:152] initial CRD sync complete...
	I0717 22:17:12.161607       1 autoregister_controller.go:141] Starting autoregister controller
	I0717 22:17:12.161639       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0717 22:17:12.161690       1 cache.go:39] Caches are synced for autoregister controller
	I0717 22:17:12.169963       1 controller.go:624] quota admission added evaluator for: namespaces
	I0717 22:17:12.269646       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 22:17:12.788548       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0717 22:17:13.017640       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0717 22:17:13.021182       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0717 22:17:13.021200       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0717 22:17:13.415633       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 22:17:13.449175       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 22:17:13.582563       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0717 22:17:13.588127       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0717 22:17:13.589026       1 controller.go:624] quota admission added evaluator for: endpoints
	I0717 22:17:13.592841       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0717 22:17:14.173974       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0717 22:17:14.942329       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0717 22:17:14.953197       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0717 22:17:14.964552       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0717 22:17:28.986848       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0717 22:17:29.188871       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [e3050441a4f6af71e90d9d4c736acf31e269d3bd6e53c34561c6cc757d0f7ad6] <==
	* I0717 22:17:28.533553       1 shared_informer.go:318] Caches are synced for persistent volume
	I0717 22:17:28.535898       1 shared_informer.go:318] Caches are synced for attach detach
	I0717 22:17:28.536521       1 shared_informer.go:318] Caches are synced for resource quota
	I0717 22:17:28.586233       1 shared_informer.go:318] Caches are synced for resource quota
	I0717 22:17:28.904641       1 shared_informer.go:318] Caches are synced for garbage collector
	I0717 22:17:28.983306       1 shared_informer.go:318] Caches are synced for garbage collector
	I0717 22:17:28.983343       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0717 22:17:28.991095       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5d78c9869d to 2"
	I0717 22:17:29.091029       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5d78c9869d to 1 from 2"
	I0717 22:17:29.274260       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-4cxgd"
	I0717 22:17:29.360331       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-29cp4"
	I0717 22:17:29.463310       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-hnbnm"
	I0717 22:17:29.472908       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-s4bbn"
	I0717 22:17:29.572191       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5d78c9869d-hnbnm"
	I0717 22:17:33.382628       1 node_lifecycle_controller.go:1046] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0717 22:17:45.869899       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-265316-m02\" does not exist"
	I0717 22:17:45.877032       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-265316-m02" podCIDRs=[10.244.1.0/24]
	I0717 22:17:45.882202       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-w5ss9"
	I0717 22:17:45.882233       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-mgwvn"
	W0717 22:17:47.822049       1 topologycache.go:232] Can't get CPU or zone information for multinode-265316-m02 node
	I0717 22:17:48.384658       1 event.go:307] "Event occurred" object="multinode-265316-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-265316-m02 event: Registered Node multinode-265316-m02 in Controller"
	I0717 22:17:48.384674       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-265316-m02"
	I0717 22:17:50.295486       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-67b7f59bb to 2"
	I0717 22:17:50.303456       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-chlgz"
	I0717 22:17:50.308502       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-dhkzz"
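
	The "Event occurred" lines are the controller-manager's event recorder writing Event objects through the API. A hedged sketch of the same recorder wiring in client-go; the kubeconfig path is a hypothetical placeholder and the pod name is taken from the log:

	    package main

	    import (
	    	"context"

	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/kubernetes/scheme"
	    	typedcorev1 "k8s.io/client-go/kubernetes/typed/core/v1"
	    	"k8s.io/client-go/tools/clientcmd"
	    	"k8s.io/client-go/tools/record"
	    )

	    func main() {
	    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // hypothetical path
	    	if err != nil {
	    		panic(err)
	    	}
	    	cs := kubernetes.NewForConfigOrDie(cfg)

	    	b := record.NewBroadcaster()
	    	defer b.Shutdown()
	    	b.StartRecordingToSink(&typedcorev1.EventSinkImpl{Interface: cs.CoreV1().Events("")})
	    	rec := b.NewRecorder(scheme.Scheme, corev1.EventSource{Component: "example-controller"})

	    	pod, err := cs.CoreV1().Pods("default").Get(context.TODO(), "busybox-67b7f59bb-dhkzz", metav1.GetOptions{})
	    	if err != nil {
	    		panic(err)
	    	}
	    	// Produces an Event attached to the pod, the same mechanism behind the log lines above.
	    	rec.Event(pod, corev1.EventTypeNormal, "Example", "emitted from the sketch above")
	    }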
	
	* 
	* ==> kube-proxy [d655eb428934c22ae1c4611421a1adefb7ad1c03f6ce35cdb53e14977478a2a0] <==
	* I0717 22:17:29.989395       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I0717 22:17:29.989467       1 server_others.go:110] "Detected node IP" address="192.168.58.2"
	I0717 22:17:29.989489       1 server_others.go:554] "Using iptables proxy"
	I0717 22:17:30.007827       1 server_others.go:192] "Using iptables Proxier"
	I0717 22:17:30.007856       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0717 22:17:30.007863       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0717 22:17:30.007878       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0717 22:17:30.007910       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 22:17:30.008538       1 server.go:658] "Version info" version="v1.27.3"
	I0717 22:17:30.008561       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 22:17:30.009115       1 config.go:188] "Starting service config controller"
	I0717 22:17:30.009140       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0717 22:17:30.009141       1 config.go:97] "Starting endpoint slice config controller"
	I0717 22:17:30.009158       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0717 22:17:30.009166       1 config.go:315] "Starting node config controller"
	I0717 22:17:30.009172       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0717 22:17:30.109245       1 shared_informer.go:318] Caches are synced for node config
	I0717 22:17:30.109259       1 shared_informer.go:318] Caches are synced for service config
	I0717 22:17:30.109320       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [b7ccc9a540d05f3fbcbf60228ec3d83e74270e207e8133c96845270f2b7f53e0] <==
	* W0717 22:17:12.265911       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 22:17:12.265937       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 22:17:12.265948       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 22:17:12.265963       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 22:17:12.265982       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 22:17:12.265987       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 22:17:12.265997       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 22:17:12.265998       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 22:17:12.266009       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 22:17:12.266020       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 22:17:13.088994       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 22:17:13.089039       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 22:17:13.113380       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 22:17:13.113422       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 22:17:13.173030       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 22:17:13.173075       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 22:17:13.178456       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 22:17:13.178494       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 22:17:13.257101       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 22:17:13.257141       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 22:17:13.277003       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 22:17:13.277042       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 22:17:13.405860       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 22:17:13.405894       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0717 22:17:16.463006       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Jul 17 22:17:29 multinode-265316 kubelet[1585]: I0717 22:17:29.363689    1585 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1297fe2e-d86e-4494-a6ac-e8b95b9ef84a-xtables-lock\") pod \"kube-proxy-4cxgd\" (UID: \"1297fe2e-d86e-4494-a6ac-e8b95b9ef84a\") " pod="kube-system/kube-proxy-4cxgd"
	Jul 17 22:17:29 multinode-265316 kubelet[1585]: I0717 22:17:29.366339    1585 topology_manager.go:212] "Topology Admit Handler"
	Jul 17 22:17:29 multinode-265316 kubelet[1585]: I0717 22:17:29.464687    1585 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/00ea3df0-45d7-4c70-838e-12f1b43f9179-lib-modules\") pod \"kindnet-29cp4\" (UID: \"00ea3df0-45d7-4c70-838e-12f1b43f9179\") " pod="kube-system/kindnet-29cp4"
	Jul 17 22:17:29 multinode-265316 kubelet[1585]: I0717 22:17:29.464772    1585 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/00ea3df0-45d7-4c70-838e-12f1b43f9179-cni-cfg\") pod \"kindnet-29cp4\" (UID: \"00ea3df0-45d7-4c70-838e-12f1b43f9179\") " pod="kube-system/kindnet-29cp4"
	Jul 17 22:17:29 multinode-265316 kubelet[1585]: I0717 22:17:29.464854    1585 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5llk\" (UniqueName: \"kubernetes.io/projected/00ea3df0-45d7-4c70-838e-12f1b43f9179-kube-api-access-c5llk\") pod \"kindnet-29cp4\" (UID: \"00ea3df0-45d7-4c70-838e-12f1b43f9179\") " pod="kube-system/kindnet-29cp4"
	Jul 17 22:17:29 multinode-265316 kubelet[1585]: I0717 22:17:29.464986    1585 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/00ea3df0-45d7-4c70-838e-12f1b43f9179-xtables-lock\") pod \"kindnet-29cp4\" (UID: \"00ea3df0-45d7-4c70-838e-12f1b43f9179\") " pod="kube-system/kindnet-29cp4"
	Jul 17 22:17:29 multinode-265316 kubelet[1585]: W0717 22:17:29.700369    1585 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/529ff452b9ecf25960278abb78646d8b0e8f53a086960470b4fafcfa793123af/crio-17311f98f64cb01d409ab5192257201078317007c9d1857b8f53e342f4435624 WatchSource:0}: Error finding container 17311f98f64cb01d409ab5192257201078317007c9d1857b8f53e342f4435624: Status 404 returned error can't find the container with id 17311f98f64cb01d409ab5192257201078317007c9d1857b8f53e342f4435624
	Jul 17 22:17:29 multinode-265316 kubelet[1585]: W0717 22:17:29.700623    1585 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/529ff452b9ecf25960278abb78646d8b0e8f53a086960470b4fafcfa793123af/crio-1a1829067b58d389a86084aa40bc58eb1a722c82f293811561402f4e3b42aaf1 WatchSource:0}: Error finding container 1a1829067b58d389a86084aa40bc58eb1a722c82f293811561402f4e3b42aaf1: Status 404 returned error can't find the container with id 1a1829067b58d389a86084aa40bc58eb1a722c82f293811561402f4e3b42aaf1
	Jul 17 22:17:30 multinode-265316 kubelet[1585]: I0717 22:17:30.175566    1585 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-4cxgd" podStartSLOduration=1.175522406 podCreationTimestamp="2023-07-17 22:17:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-17 22:17:30.17531271 +0000 UTC m=+15.257828559" watchObservedRunningTime="2023-07-17 22:17:30.175522406 +0000 UTC m=+15.258038259"
	Jul 17 22:17:30 multinode-265316 kubelet[1585]: I0717 22:17:30.186135    1585 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-29cp4" podStartSLOduration=1.186081799 podCreationTimestamp="2023-07-17 22:17:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-17 22:17:30.186028849 +0000 UTC m=+15.268544699" watchObservedRunningTime="2023-07-17 22:17:30.186081799 +0000 UTC m=+15.268597651"
	Jul 17 22:17:31 multinode-265316 kubelet[1585]: I0717 22:17:31.617899    1585 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jul 17 22:17:31 multinode-265316 kubelet[1585]: I0717 22:17:31.640492    1585 topology_manager.go:212] "Topology Admit Handler"
	Jul 17 22:17:31 multinode-265316 kubelet[1585]: I0717 22:17:31.640691    1585 topology_manager.go:212] "Topology Admit Handler"
	Jul 17 22:17:31 multinode-265316 kubelet[1585]: I0717 22:17:31.686501    1585 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/030a51da-3ea5-4a58-8f1b-452efc02de5c-tmp\") pod \"storage-provisioner\" (UID: \"030a51da-3ea5-4a58-8f1b-452efc02de5c\") " pod="kube-system/storage-provisioner"
	Jul 17 22:17:31 multinode-265316 kubelet[1585]: I0717 22:17:31.686562    1585 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxg2z\" (UniqueName: \"kubernetes.io/projected/f5bd6a07-4ec0-46cb-8b1e-ef5178a23919-kube-api-access-zxg2z\") pod \"coredns-5d78c9869d-s4bbn\" (UID: \"f5bd6a07-4ec0-46cb-8b1e-ef5178a23919\") " pod="kube-system/coredns-5d78c9869d-s4bbn"
	Jul 17 22:17:31 multinode-265316 kubelet[1585]: I0717 22:17:31.686603    1585 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld9wk\" (UniqueName: \"kubernetes.io/projected/030a51da-3ea5-4a58-8f1b-452efc02de5c-kube-api-access-ld9wk\") pod \"storage-provisioner\" (UID: \"030a51da-3ea5-4a58-8f1b-452efc02de5c\") " pod="kube-system/storage-provisioner"
	Jul 17 22:17:31 multinode-265316 kubelet[1585]: I0717 22:17:31.686668    1585 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5bd6a07-4ec0-46cb-8b1e-ef5178a23919-config-volume\") pod \"coredns-5d78c9869d-s4bbn\" (UID: \"f5bd6a07-4ec0-46cb-8b1e-ef5178a23919\") " pod="kube-system/coredns-5d78c9869d-s4bbn"
	Jul 17 22:17:31 multinode-265316 kubelet[1585]: W0717 22:17:31.996234    1585 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/529ff452b9ecf25960278abb78646d8b0e8f53a086960470b4fafcfa793123af/crio-8a4f42b2b6c9fbd3872f978b1a7a56c276c683a00f9341cb865c2d987ffb2a84 WatchSource:0}: Error finding container 8a4f42b2b6c9fbd3872f978b1a7a56c276c683a00f9341cb865c2d987ffb2a84: Status 404 returned error can't find the container with id 8a4f42b2b6c9fbd3872f978b1a7a56c276c683a00f9341cb865c2d987ffb2a84
	Jul 17 22:17:31 multinode-265316 kubelet[1585]: W0717 22:17:31.996537    1585 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/529ff452b9ecf25960278abb78646d8b0e8f53a086960470b4fafcfa793123af/crio-e5e02d25837adb34e94be6abf056ea339b90e79a9de9337a3bcdbe480a278d4c WatchSource:0}: Error finding container e5e02d25837adb34e94be6abf056ea339b90e79a9de9337a3bcdbe480a278d4c: Status 404 returned error can't find the container with id e5e02d25837adb34e94be6abf056ea339b90e79a9de9337a3bcdbe480a278d4c
	Jul 17 22:17:32 multinode-265316 kubelet[1585]: I0717 22:17:32.181553    1585 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=3.181507246 podCreationTimestamp="2023-07-17 22:17:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-17 22:17:32.18125723 +0000 UTC m=+17.263773082" watchObservedRunningTime="2023-07-17 22:17:32.181507246 +0000 UTC m=+17.264023100"
	Jul 17 22:17:32 multinode-265316 kubelet[1585]: I0717 22:17:32.192554    1585 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-s4bbn" podStartSLOduration=3.192455501 podCreationTimestamp="2023-07-17 22:17:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-17 22:17:32.192104489 +0000 UTC m=+17.274620341" watchObservedRunningTime="2023-07-17 22:17:32.192455501 +0000 UTC m=+17.274971352"
	Jul 17 22:17:50 multinode-265316 kubelet[1585]: I0717 22:17:50.312884    1585 topology_manager.go:212] "Topology Admit Handler"
	Jul 17 22:17:50 multinode-265316 kubelet[1585]: I0717 22:17:50.403919    1585 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frfvt\" (UniqueName: \"kubernetes.io/projected/a8741c75-ca38-4ba7-8288-ae6966dfcce5-kube-api-access-frfvt\") pod \"busybox-67b7f59bb-dhkzz\" (UID: \"a8741c75-ca38-4ba7-8288-ae6966dfcce5\") " pod="default/busybox-67b7f59bb-dhkzz"
	Jul 17 22:17:50 multinode-265316 kubelet[1585]: W0717 22:17:50.672268    1585 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/529ff452b9ecf25960278abb78646d8b0e8f53a086960470b4fafcfa793123af/crio-789902451cb43e6f21615a733bbd861f03e3695c1a3d90d100e7fc3d2d10bad7 WatchSource:0}: Error finding container 789902451cb43e6f21615a733bbd861f03e3695c1a3d90d100e7fc3d2d10bad7: Status 404 returned error can't find the container with id 789902451cb43e6f21615a733bbd861f03e3695c1a3d90d100e7fc3d2d10bad7
	Jul 17 22:17:53 multinode-265316 kubelet[1585]: I0717 22:17:53.222619    1585 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-67b7f59bb-dhkzz" podStartSLOduration=0.925083924 podCreationTimestamp="2023-07-17 22:17:50 +0000 UTC" firstStartedPulling="2023-07-17 22:17:50.676196592 +0000 UTC m=+35.758712442" lastFinishedPulling="2023-07-17 22:17:52.973692861 +0000 UTC m=+38.056208695" observedRunningTime="2023-07-17 22:17:53.222253129 +0000 UTC m=+38.304768988" watchObservedRunningTime="2023-07-17 22:17:53.222580177 +0000 UTC m=+38.305096029"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-265316 -n multinode-265316
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-265316 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (2.97s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (118.12s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:132: (dbg) Run:  /tmp/minikube-v1.9.0.2561505772.exe start -p running-upgrade-194024 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:132: (dbg) Done: /tmp/minikube-v1.9.0.2561505772.exe start -p running-upgrade-194024 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m50.161228269s)
version_upgrade_test.go:142: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-194024 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:142: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-194024 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (3.450333254s)

                                                
                                                
-- stdout --
	* [running-upgrade-194024] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16899
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16899-218877/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-218877/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-194024 in cluster running-upgrade-194024
	* Pulling base image ...
	* Updating the running docker "running-upgrade-194024" container ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 22:28:53.746738  385026 out.go:296] Setting OutFile to fd 1 ...
	I0717 22:28:53.746887  385026 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:28:53.746900  385026 out.go:309] Setting ErrFile to fd 2...
	I0717 22:28:53.746906  385026 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:28:53.747197  385026 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-218877/.minikube/bin
	I0717 22:28:53.747904  385026 out.go:303] Setting JSON to false
	I0717 22:28:53.749774  385026 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":7878,"bootTime":1689625056,"procs":663,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 22:28:53.749873  385026 start.go:138] virtualization: kvm guest
	I0717 22:28:53.752512  385026 out.go:177] * [running-upgrade-194024] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	I0717 22:28:53.754483  385026 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 22:28:53.754518  385026 notify.go:220] Checking for updates...
	I0717 22:28:53.757730  385026 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 22:28:53.760362  385026 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16899-218877/kubeconfig
	I0717 22:28:53.763146  385026 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-218877/.minikube
	I0717 22:28:53.765637  385026 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 22:28:53.767321  385026 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 22:28:53.769324  385026 config.go:182] Loaded profile config "running-upgrade-194024": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0717 22:28:53.769367  385026 start_flags.go:695] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0717 22:28:53.771448  385026 out.go:177] * Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	I0717 22:28:53.772857  385026 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 22:28:53.802815  385026 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 22:28:53.802921  385026 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 22:28:53.907570  385026 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:119 OomKillDisable:true NGoroutines:105 SystemTime:2023-07-17 22:28:53.890129785 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 22:28:53.907711  385026 docker.go:294] overlay module found
	I0717 22:28:53.910893  385026 out.go:177] * Using the docker driver based on existing profile
	I0717 22:28:53.912704  385026 start.go:298] selected driver: docker
	I0717 22:28:53.912727  385026 start.go:880] validating driver "docker" against &{Name:running-upgrade-194024 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-194024 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:28:53.912849  385026 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 22:28:53.913940  385026 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 22:28:54.009390  385026 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:125 OomKillDisable:true NGoroutines:113 SystemTime:2023-07-17 22:28:53.99684667 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 22:28:54.009695  385026 cni.go:84] Creating CNI manager for ""
	I0717 22:28:54.009711  385026 cni.go:130] EnableDefaultCNI is true, recommending bridge
	I0717 22:28:54.009721  385026 start_flags.go:319] config:
	{Name:running-upgrade-194024 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-194024 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:28:54.014627  385026 out.go:177] * Starting control plane node running-upgrade-194024 in cluster running-upgrade-194024
	I0717 22:28:54.016062  385026 cache.go:122] Beginning downloading kic base image for docker with crio
	I0717 22:28:54.017542  385026 out.go:177] * Pulling base image ...
	I0717 22:28:54.019757  385026 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I0717 22:28:54.019875  385026 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 22:28:54.046020  385026 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0717 22:28:54.046049  385026 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	W0717 22:28:54.278025  385026 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0717 22:28:54.278221  385026 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/running-upgrade-194024/config.json ...
	I0717 22:28:54.278354  385026 cache.go:107] acquiring lock: {Name:mkca6f29a1b606796b9db67ee9b8bd55cd7c498b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:28:54.278396  385026 cache.go:107] acquiring lock: {Name:mk6e6299b54ac5005fb9824ce5f3019cd16eea3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:28:54.278497  385026 cache.go:107] acquiring lock: {Name:mkc66cb3bef8695226a3efb17dddd1fe2af439fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:28:54.278525  385026 cache.go:107] acquiring lock: {Name:mkd48c52dda5f619f1d607192c9e10d5385a8483 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:28:54.278617  385026 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.0
	I0717 22:28:54.278636  385026 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.0
	I0717 22:28:54.278510  385026 cache.go:115] /home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0717 22:28:54.278823  385026 cache.go:107] acquiring lock: {Name:mk63d343a754d6d789dbda598290c3ac15744223 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:28:54.278836  385026 cache.go:107] acquiring lock: {Name:mk793b40a5c2fff98a889b50e408469b732cf2c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:28:54.278851  385026 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 506.512µs
	I0717 22:28:54.278867  385026 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0717 22:28:54.278517  385026 cache.go:195] Successfully downloaded all kic artifacts
	I0717 22:28:54.278875  385026 cache.go:107] acquiring lock: {Name:mk54e82cc18ef6777061ef66b803fb7d59d19274 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:28:54.278895  385026 start.go:365] acquiring machines lock for running-upgrade-194024: {Name:mk18cb20df345c6b28d54eb84aaa9f071e1975e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:28:54.278943  385026 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0717 22:28:54.278978  385026 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.0
	I0717 22:28:54.278984  385026 start.go:369] acquired machines lock for "running-upgrade-194024" in 71.623µs
	I0717 22:28:54.279002  385026 start.go:96] Skipping create...Using existing machine configuration
	I0717 22:28:54.279014  385026 fix.go:54] fixHost starting: m01
	I0717 22:28:54.279090  385026 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0717 22:28:54.279120  385026 cache.go:107] acquiring lock: {Name:mkb75ba42f9cdb24da4e9c92034c1de85d4957e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:28:54.279208  385026 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0717 22:28:54.279295  385026 cli_runner.go:164] Run: docker container inspect running-upgrade-194024 --format={{.State.Status}}
	I0717 22:28:54.279538  385026 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.0
	I0717 22:28:54.280276  385026 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.0
	I0717 22:28:54.280280  385026 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.0
	I0717 22:28:54.280416  385026 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0717 22:28:54.280433  385026 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0717 22:28:54.280591  385026 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.0
	I0717 22:28:54.280728  385026 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0717 22:28:54.280841  385026 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.0
	I0717 22:28:54.321781  385026 fix.go:102] recreateIfNeeded on running-upgrade-194024: state=Running err=<nil>
	W0717 22:28:54.321820  385026 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 22:28:54.324790  385026 out.go:177] * Updating the running docker "running-upgrade-194024" container ...
	I0717 22:28:54.326288  385026 machine.go:88] provisioning docker machine ...
	I0717 22:28:54.326332  385026 ubuntu.go:169] provisioning hostname "running-upgrade-194024"
	I0717 22:28:54.326404  385026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-194024
	I0717 22:28:54.350525  385026 main.go:141] libmachine: Using SSH client type: native
	I0717 22:28:54.351015  385026 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32925 <nil> <nil>}
	I0717 22:28:54.351038  385026 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-194024 && echo "running-upgrade-194024" | sudo tee /etc/hostname
	I0717 22:28:54.468141  385026 cache.go:162] opening:  /home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0717 22:28:54.495057  385026 cache.go:162] opening:  /home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0717 22:28:54.497901  385026 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-194024
	
	I0717 22:28:54.497996  385026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-194024
	I0717 22:28:54.508147  385026 cache.go:162] opening:  /home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0717 22:28:54.517553  385026 main.go:141] libmachine: Using SSH client type: native
	I0717 22:28:54.517983  385026 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32925 <nil> <nil>}
	I0717 22:28:54.518003  385026 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-194024' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-194024/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-194024' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 22:28:54.536142  385026 cache.go:162] opening:  /home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0
	I0717 22:28:54.536923  385026 cache.go:162] opening:  /home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0
	I0717 22:28:54.549370  385026 cache.go:162] opening:  /home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0
	I0717 22:28:54.550511  385026 cache.go:162] opening:  /home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0
	I0717 22:28:54.615208  385026 cache.go:157] /home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I0717 22:28:54.615238  385026 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 336.121374ms
	I0717 22:28:54.615255  385026 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I0717 22:28:54.632372  385026 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 22:28:54.632433  385026 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16899-218877/.minikube CaCertPath:/home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16899-218877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16899-218877/.minikube}
	I0717 22:28:54.632463  385026 ubuntu.go:177] setting up certificates
	I0717 22:28:54.632484  385026 provision.go:83] configureAuth start
	I0717 22:28:54.632543  385026 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-194024
	I0717 22:28:54.654057  385026 provision.go:138] copyHostCerts
	I0717 22:28:54.654124  385026 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-218877/.minikube/ca.pem, removing ...
	I0717 22:28:54.654133  385026 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-218877/.minikube/ca.pem
	I0717 22:28:54.654632  385026 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16899-218877/.minikube/ca.pem (1078 bytes)
	I0717 22:28:54.654838  385026 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-218877/.minikube/cert.pem, removing ...
	I0717 22:28:54.654854  385026 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-218877/.minikube/cert.pem
	I0717 22:28:54.654896  385026 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16899-218877/.minikube/cert.pem (1123 bytes)
	I0717 22:28:54.654975  385026 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-218877/.minikube/key.pem, removing ...
	I0717 22:28:54.654985  385026 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-218877/.minikube/key.pem
	I0717 22:28:54.655016  385026 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16899-218877/.minikube/key.pem (1679 bytes)
	I0717 22:28:54.655083  385026 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16899-218877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-194024 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-194024]
	I0717 22:28:54.818423  385026 provision.go:172] copyRemoteCerts
	I0717 22:28:54.818496  385026 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 22:28:54.818550  385026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-194024
	I0717 22:28:54.856304  385026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32925 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/running-upgrade-194024/id_rsa Username:docker}
	I0717 22:28:54.977258  385026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 22:28:55.009436  385026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0717 22:28:55.037318  385026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 22:28:55.074012  385026 provision.go:86] duration metric: configureAuth took 441.510724ms
	I0717 22:28:55.074042  385026 ubuntu.go:193] setting minikube options for container-runtime
	I0717 22:28:55.074253  385026 config.go:182] Loaded profile config "running-upgrade-194024": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0717 22:28:55.074507  385026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-194024
	I0717 22:28:55.099239  385026 cache.go:157] /home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I0717 22:28:55.099307  385026 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 820.472232ms
	I0717 22:28:55.099323  385026 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I0717 22:28:55.103532  385026 main.go:141] libmachine: Using SSH client type: native
	I0717 22:28:55.103944  385026 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32925 <nil> <nil>}
	I0717 22:28:55.103957  385026 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 22:28:55.671839  385026 cache.go:157] /home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I0717 22:28:55.671872  385026 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 1.39337963s
	I0717 22:28:55.671890  385026 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I0717 22:28:55.751725  385026 cache.go:157] /home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I0717 22:28:55.751750  385026 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 1.473380801s
	I0717 22:28:55.751762  385026 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I0717 22:28:55.756135  385026 cache.go:157] /home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I0717 22:28:55.756226  385026 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 1.47770068s
	I0717 22:28:55.756270  385026 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I0717 22:28:55.773408  385026 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 22:28:55.773441  385026 machine.go:91] provisioned docker machine in 1.447132555s
	I0717 22:28:55.773452  385026 start.go:300] post-start starting for "running-upgrade-194024" (driver="docker")
	I0717 22:28:55.773464  385026 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 22:28:55.773545  385026 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 22:28:55.773595  385026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-194024
	I0717 22:28:55.795885  385026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32925 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/running-upgrade-194024/id_rsa Username:docker}
	I0717 22:28:55.897491  385026 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 22:28:55.901749  385026 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 22:28:55.901784  385026 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 22:28:55.901799  385026 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 22:28:55.901807  385026 info.go:137] Remote host: Ubuntu 19.10
	I0717 22:28:55.901820  385026 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-218877/.minikube/addons for local assets ...
	I0717 22:28:55.901881  385026 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-218877/.minikube/files for local assets ...
	I0717 22:28:55.901976  385026 filesync.go:149] local asset: /home/jenkins/minikube-integration/16899-218877/.minikube/files/etc/ssl/certs/2256422.pem -> 2256422.pem in /etc/ssl/certs
	I0717 22:28:55.902095  385026 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 22:28:55.910182  385026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/files/etc/ssl/certs/2256422.pem --> /etc/ssl/certs/2256422.pem (1708 bytes)
	I0717 22:28:55.929146  385026 start.go:303] post-start completed in 155.673413ms
	I0717 22:28:55.929245  385026 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 22:28:55.929289  385026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-194024
	I0717 22:28:55.949363  385026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32925 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/running-upgrade-194024/id_rsa Username:docker}
	I0717 22:28:56.036601  385026 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 22:28:56.041293  385026 fix.go:56] fixHost completed within 1.762254461s
	I0717 22:28:56.041322  385026 start.go:83] releasing machines lock for "running-upgrade-194024", held for 1.762326156s
	I0717 22:28:56.041406  385026 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-194024
	I0717 22:28:56.061534  385026 ssh_runner.go:195] Run: cat /version.json
	I0717 22:28:56.061592  385026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-194024
	I0717 22:28:56.061694  385026 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 22:28:56.061773  385026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-194024
	I0717 22:28:56.082923  385026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32925 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/running-upgrade-194024/id_rsa Username:docker}
	I0717 22:28:56.090787  385026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32925 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/running-upgrade-194024/id_rsa Username:docker}
	W0717 22:28:56.167126  385026 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0717 22:28:56.371528  385026 cache.go:157] /home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0717 22:28:56.371562  385026 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 2.092755252s
	I0717 22:28:56.371578  385026 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0717 22:28:56.402095  385026 cache.go:157] /home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I0717 22:28:56.402129  385026 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 2.123253564s
	I0717 22:28:56.402179  385026 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I0717 22:28:56.402203  385026 cache.go:87] Successfully saved all images to host disk.
	I0717 22:28:56.402256  385026 ssh_runner.go:195] Run: systemctl --version
	I0717 22:28:56.406698  385026 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 22:28:56.452313  385026 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 22:28:56.456609  385026 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 22:28:56.515644  385026 cni.go:227] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0717 22:28:56.515735  385026 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 22:28:56.541132  385026 cni.go:268] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 22:28:56.541161  385026 start.go:466] detecting cgroup driver to use...
	I0717 22:28:56.541199  385026 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 22:28:56.541252  385026 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 22:28:56.564008  385026 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 22:28:56.574656  385026 docker.go:196] disabling cri-docker service (if available) ...
	I0717 22:28:56.574717  385026 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 22:28:56.586483  385026 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 22:28:56.608683  385026 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0717 22:28:56.620192  385026 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0717 22:28:56.620260  385026 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 22:28:56.762376  385026 docker.go:212] disabling docker service ...
	I0717 22:28:56.762465  385026 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 22:28:56.776285  385026 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 22:28:56.851607  385026 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 22:28:56.956689  385026 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 22:28:57.078614  385026 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 22:28:57.098808  385026 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 22:28:57.121747  385026 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0717 22:28:57.121807  385026 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:28:57.132472  385026 out.go:177] 
	W0717 22:28:57.134472  385026 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0717 22:28:57.134509  385026 out.go:239] * 
	W0717 22:28:57.135926  385026 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 22:28:57.137416  385026 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:144: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-194024 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
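The RUNTIME_ENABLE error above comes from the pause_image rewrite: the v1.9.0 base image (CRI-O 1.17 on Ubuntu 19.10, per the logs) appears to ship only a monolithic /etc/crio/crio.conf, so the drop-in path /etc/crio/crio.conf.d/02-crio.conf that the new binary edits does not exist. A minimal guarded variant of that sed, assuming the older single-file layout (an illustration, not minikube's actual fix):

    # fall back to the monolithic config when the drop-in is absent (assumption)
    conf=/etc/crio/crio.conf.d/02-crio.conf
    [ -f "$conf" ] || conf=/etc/crio/crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$conf"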
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-07-17 22:28:57.158507662 +0000 UTC m=+1882.261585139
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-194024
helpers_test.go:235: (dbg) docker inspect running-upgrade-194024:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "adb386e67a35948f71679bef9be0b26b23d259f4aed8f46861f67f5128f8b03a",
	        "Created": "2023-07-17T22:27:30.515289809Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 361068,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-17T22:27:33.419698093Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/adb386e67a35948f71679bef9be0b26b23d259f4aed8f46861f67f5128f8b03a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/adb386e67a35948f71679bef9be0b26b23d259f4aed8f46861f67f5128f8b03a/hostname",
	        "HostsPath": "/var/lib/docker/containers/adb386e67a35948f71679bef9be0b26b23d259f4aed8f46861f67f5128f8b03a/hosts",
	        "LogPath": "/var/lib/docker/containers/adb386e67a35948f71679bef9be0b26b23d259f4aed8f46861f67f5128f8b03a/adb386e67a35948f71679bef9be0b26b23d259f4aed8f46861f67f5128f8b03a-json.log",
	        "Name": "/running-upgrade-194024",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-194024:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b850569f016dbeae28ce7c8ae44e8074c6c0eda05e6e3bb869d8494e4da4c153-init/diff:/var/lib/docker/overlay2/4bdf9bc437e14ebe276ad78ff25aefaff0365a1eb82873e70239b08611d39fc3/diff:/var/lib/docker/overlay2/362fef174f8d8ee66effd38d33d7aea5e259cf8adeee81ba598fd86182dcf239/diff:/var/lib/docker/overlay2/feeed51bb80dec37ed724e7aaf021314832a7b5ac07a4b3165ac63d987164a70/diff:/var/lib/docker/overlay2/3dd10573e7cb19fd02973f6ec54b91db981f4bbc09ab107664305379e8562876/diff:/var/lib/docker/overlay2/f5cabab4da1bcf757259b8671dfdc9fbc10fa34579b077fe0bf8d1e84551890a/diff:/var/lib/docker/overlay2/4c9fd61006e8e7011a6ee01cfb6dec495107669a5dbb22ea458780a30baf0daa/diff:/var/lib/docker/overlay2/cfb1a35c176968874be39186db74d35869ddbc2b398b1a1b683baff813519a63/diff:/var/lib/docker/overlay2/52a4a60614d63f6c06c71c8cab4d33c6ec757d9456980ae4c1322d8a11d8d7de/diff:/var/lib/docker/overlay2/0ea7f49528cb2c386ab1ea3bd48b0f7e976b367039b8d5d13a1a130e0a694373/diff:/var/lib/docker/overlay2/766455
2fb22fb52502a5760376fb74c6a9ca7dfa2f250d66b91d972fb7d355da/diff:/var/lib/docker/overlay2/419a848e70d2ac8030e4ef078847c18b064a6e5100730c7c15ac0347de9a5215/diff:/var/lib/docker/overlay2/fde5ed5dbe073db7b65b46657b9d2f1b8b7997b9ad41211ba085805ba94182f3/diff:/var/lib/docker/overlay2/d2c7185f213ff7dbc24968a6cd917fa1990668c5c971e858fd7165fa82935fcc/diff:/var/lib/docker/overlay2/58b72da9b7d07a44e5490b7a5b00a562109c252c7ae8cbd4ea5e914e708e306c/diff:/var/lib/docker/overlay2/04ba9dd5941005961af8f151b184add7aa2c2e063ccbf52d38d5f3996d09bcb6/diff:/var/lib/docker/overlay2/36da71706d75bbbe1adc73f77ba12b6f95b0debbc3b432c424943b7c4e6d9669/diff:/var/lib/docker/overlay2/42ebbf18d67331936ebf2506af523fe8624ef9c2cde21c9ba39306fc10d905ad/diff:/var/lib/docker/overlay2/6182cd2b70e6042bbdd94669332b094a874d66f4848255776a7b2822e144e4a9/diff:/var/lib/docker/overlay2/f42f8932542a9f5d02f616e5c9c959b8937256d3f723dfad72b1b1f79fc8bc5a/diff:/var/lib/docker/overlay2/61966b5eac29a3628f191eb0f6218771e0e0ab74f21392d7be319c51c866f51f/diff:/var/lib/d
ocker/overlay2/a52ad41f2a873c615833e2e45e97ae4e9df6927410803918d375f4d04916c788/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b850569f016dbeae28ce7c8ae44e8074c6c0eda05e6e3bb869d8494e4da4c153/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b850569f016dbeae28ce7c8ae44e8074c6c0eda05e6e3bb869d8494e4da4c153/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b850569f016dbeae28ce7c8ae44e8074c6c0eda05e6e3bb869d8494e4da4c153/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-194024",
	                "Source": "/var/lib/docker/volumes/running-upgrade-194024/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-194024",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-194024",
	                "name.minikube.sigs.k8s.io": "running-upgrade-194024",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5e11046a548513746817b073ca24e0a49aa69b3bac92ac991386e9651ea8d52b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32925"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32924"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32923"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/5e11046a5485",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "9284b8360b09feaeb2c3a667cb71750032b9efbb1ee0ed10648301089080c47b",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "30e9569af48944776c1a9b7dbfa3b2fbe02ee2acfb7e584f9eb448189f805389",
	                    "EndpointID": "9284b8360b09feaeb2c3a667cb71750032b9efbb1ee0ed10648301089080c47b",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
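The host port mapped to 22/tcp in the inspect output above (32925) is what the ssh clients earlier in the log connected to; the harness reads it with the same Go template that appears in the log:

    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
      running-upgrade-194024
    # -> 32925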
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-194024 -n running-upgrade-194024
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-194024 -n running-upgrade-194024: exit status 4 (380.814866ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 22:28:57.519373  386883 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-194024" does not appear in /home/jenkins/minikube-integration/16899-218877/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-194024" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-194024" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-194024
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-194024: (1.893691215s)
--- FAIL: TestRunningBinaryUpgrade (118.12s)
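The status error (exit status 4) is a downstream effect of the aborted start: the new binary exited before writing the profile into the shared kubeconfig, so the endpoint lookup at status.go:415 finds nothing. A quick way to confirm, using the kubeconfig path from the error above:

    KUBECONFIG=/home/jenkins/minikube-integration/16899-218877/kubeconfig \
      kubectl config get-contexts -o name | grep running-upgrade-194024 \
      || echo "context missing; see the update-context hint in stdout"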

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (122.39s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /tmp/minikube-v1.9.0.3253946262.exe start -p stopped-upgrade-173210 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:195: (dbg) Non-zero exit: /tmp/minikube-v1.9.0.3253946262.exe start -p stopped-upgrade-173210 --memory=2200 --vm-driver=docker  --container-runtime=crio: exit status 70 (1m25.758780774s)

                                                
                                                
-- stdout --
	! [stopped-upgrade-173210] minikube v1.9.0 on Ubuntu 20.04
	  - MINIKUBE_LOCATION=16899
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-218877/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/tmp/legacy_kubeconfig1150068405
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Creating Kubernetes in docker container with (CPUs=2) (8 available), Memory=2200MB (32089MB available) ...
	* Preparing Kubernetes v1.18.0 on CRI-O 1.17.0 ...
	  - kubeadm.pod-network-cidr=10.244.0.0/16

                                                
                                                
-- /stdout --
** stderr ** 
	* minikube 1.30.1 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.30.1
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	    > kubelet.sha256: 65 B / 65 B [--------------------------] 100.00% ? p/s 0s
	    > kubectl.sha256: 65 B / 65 B [--------------------------] 100.00% ? p/s 0s
	    > kubeadm.sha256: 65 B / 65 B [--------------------------] 100.00% ? p/s 0s
	    > kubeadm: 37.96 MiB / 37.96 MiB [---------------] 100.00% 94.44 MiB p/s 0s
	    > kubectl: 41.98 MiB / 41.98 MiB [---------------] 100.00% 61.80 MiB p/s 1s
	    > kubelet: 108.01 MiB / 108.01 MiB [-------------] 100.00% 51.30 MiB p/s 2s
	* 
	X Failed to update cluster: updating node: downloading binaries: downloading kubeadm: download failed: https://storage.googleapis.com/kubernetes-release/release/v1.18.0/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.18.0/bin/linux/amd64/kubeadm.sha256: Failed to open file for checksum: open /home/jenkins/minikube-integration/16899-218877/.minikube/cache/linux/v1.18.0/kubeadm.download: no such file or directory
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
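The first run fails while verifying kubeadm: the ?checksum=file: query in the error is the downloader's checksum directive, and the temporary kubeadm.download file it wants to hash is already gone, which looks like a race on the shared cache directory between the parallel upgrade tests; the immediate retry below succeeds. A hand-rolled equivalent of that download, with URLs taken from the error message (the verification flow is an assumption, not minikube v1.9.0's internal logic):

    url=https://storage.googleapis.com/kubernetes-release/release/v1.18.0/bin/linux/amd64/kubeadm
    curl -fLo kubeadm "$url"
    echo "$(curl -fsSL "$url.sha256")  kubeadm" | sha256sum --check -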
version_upgrade_test.go:195: (dbg) Run:  /tmp/minikube-v1.9.0.3253946262.exe start -p stopped-upgrade-173210 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:195: (dbg) Done: /tmp/minikube-v1.9.0.3253946262.exe start -p stopped-upgrade-173210 --memory=2200 --vm-driver=docker  --container-runtime=crio: (24.513180899s)
version_upgrade_test.go:204: (dbg) Run:  /tmp/minikube-v1.9.0.3253946262.exe -p stopped-upgrade-173210 stop
version_upgrade_test.go:204: (dbg) Done: /tmp/minikube-v1.9.0.3253946262.exe -p stopped-upgrade-173210 stop: (1.967006873s)
version_upgrade_test.go:210: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-173210 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:210: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-173210 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (8.868409148s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-173210] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16899
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16899-218877/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-218877/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-173210 in cluster stopped-upgrade-173210
	* Pulling base image ...
	* Restarting existing docker container for "stopped-upgrade-173210" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 22:28:56.829724  386657 out.go:296] Setting OutFile to fd 1 ...
	I0717 22:28:56.829875  386657 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:28:56.829885  386657 out.go:309] Setting ErrFile to fd 2...
	I0717 22:28:56.829889  386657 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:28:56.830103  386657 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-218877/.minikube/bin
	I0717 22:28:56.830736  386657 out.go:303] Setting JSON to false
	I0717 22:28:56.832402  386657 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":7881,"bootTime":1689625056,"procs":666,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 22:28:56.832479  386657 start.go:138] virtualization: kvm guest
	I0717 22:28:56.835144  386657 out.go:177] * [stopped-upgrade-173210] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	I0717 22:28:56.836702  386657 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 22:28:56.836757  386657 notify.go:220] Checking for updates...
	I0717 22:28:56.838369  386657 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 22:28:56.839897  386657 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16899-218877/kubeconfig
	I0717 22:28:56.841505  386657 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-218877/.minikube
	I0717 22:28:56.843099  386657 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 22:28:56.844685  386657 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 22:28:56.848038  386657 config.go:182] Loaded profile config "stopped-upgrade-173210": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0717 22:28:56.848083  386657 start_flags.go:695] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0717 22:28:56.850837  386657 out.go:177] * Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	I0717 22:28:56.852416  386657 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 22:28:56.881130  386657 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 22:28:56.881220  386657 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 22:28:56.972990  386657 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:116 OomKillDisable:true NGoroutines:94 SystemTime:2023-07-17 22:28:56.955683156 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Arch
itecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 22:28:56.973130  386657 docker.go:294] overlay module found
	I0717 22:28:56.975525  386657 out.go:177] * Using the docker driver based on existing profile
	I0717 22:28:56.977141  386657 start.go:298] selected driver: docker
	I0717 22:28:56.977158  386657 start.go:880] validating driver "docker" against &{Name:stopped-upgrade-173210 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-173210 Namespace: APIServerName:minikubeCA APIServerNames:[] API
ServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: So
cketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:28:56.977287  386657 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 22:28:56.980514  386657 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 22:28:57.065497  386657 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:116 OomKillDisable:true NGoroutines:94 SystemTime:2023-07-17 22:28:57.051860766 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Arch
itecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 22:28:57.065900  386657 cni.go:84] Creating CNI manager for ""
	I0717 22:28:57.065914  386657 cni.go:130] EnableDefaultCNI is true, recommending bridge
	I0717 22:28:57.065922  386657 start_flags.go:319] config:
	{Name:stopped-upgrade-173210 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-173210 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlu
gin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:28:57.068748  386657 out.go:177] * Starting control plane node stopped-upgrade-173210 in cluster stopped-upgrade-173210
	I0717 22:28:57.070366  386657 cache.go:122] Beginning downloading kic base image for docker with crio
	I0717 22:28:57.072118  386657 out.go:177] * Pulling base image ...
	I0717 22:28:57.073701  386657 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I0717 22:28:57.073796  386657 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 22:28:57.115239  386657 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0717 22:28:57.115285  386657 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	W0717 22:28:57.359951  386657 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0717 22:28:57.360159  386657 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/stopped-upgrade-173210/config.json ...
	I0717 22:28:57.360458  386657 cache.go:195] Successfully downloaded all kic artifacts
	I0717 22:28:57.360503  386657 start.go:365] acquiring machines lock for stopped-upgrade-173210: {Name:mk915a35b771f4f5b36e2b5686c4729885f9b5e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:28:57.360608  386657 start.go:369] acquired machines lock for "stopped-upgrade-173210" in 69.508µs
	I0717 22:28:57.360625  386657 start.go:96] Skipping create...Using existing machine configuration
	I0717 22:28:57.360631  386657 fix.go:54] fixHost starting: m01
	I0717 22:28:57.360890  386657 cli_runner.go:164] Run: docker container inspect stopped-upgrade-173210 --format={{.State.Status}}
	I0717 22:28:57.361129  386657 cache.go:107] acquiring lock: {Name:mk54e82cc18ef6777061ef66b803fb7d59d19274 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:28:57.361183  386657 cache.go:107] acquiring lock: {Name:mk6e6299b54ac5005fb9824ce5f3019cd16eea3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:28:57.361217  386657 cache.go:115] /home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I0717 22:28:57.361228  386657 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 115.192µs
	I0717 22:28:57.361240  386657 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I0717 22:28:57.361253  386657 cache.go:115] /home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I0717 22:28:57.361253  386657 cache.go:107] acquiring lock: {Name:mkc66cb3bef8695226a3efb17dddd1fe2af439fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:28:57.361269  386657 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 139.417µs
	I0717 22:28:57.361279  386657 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I0717 22:28:57.361295  386657 cache.go:115] /home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I0717 22:28:57.361289  386657 cache.go:107] acquiring lock: {Name:mkd48c52dda5f619f1d607192c9e10d5385a8483 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:28:57.361301  386657 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 49.98µs
	I0717 22:28:57.361310  386657 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I0717 22:28:57.361321  386657 cache.go:115] /home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I0717 22:28:57.361327  386657 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 40.86µs
	I0717 22:28:57.361324  386657 cache.go:107] acquiring lock: {Name:mk63d343a754d6d789dbda598290c3ac15744223 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:28:57.361334  386657 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I0717 22:28:57.361355  386657 cache.go:115] /home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0717 22:28:57.361350  386657 cache.go:107] acquiring lock: {Name:mkb75ba42f9cdb24da4e9c92034c1de85d4957e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:28:57.361361  386657 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 39.193µs
	I0717 22:28:57.361370  386657 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0717 22:28:57.361382  386657 cache.go:107] acquiring lock: {Name:mk793b40a5c2fff98a889b50e408469b732cf2c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:28:57.361391  386657 cache.go:115] /home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I0717 22:28:57.361415  386657 cache.go:115] /home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I0717 22:28:57.361421  386657 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 40.68µs
	I0717 22:28:57.361429  386657 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I0717 22:28:57.361419  386657 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 67.379µs
	I0717 22:28:57.361445  386657 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I0717 22:28:57.361444  386657 cache.go:107] acquiring lock: {Name:mkca6f29a1b606796b9db67ee9b8bd55cd7c498b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 22:28:57.361482  386657 cache.go:115] /home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0717 22:28:57.361489  386657 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 399.657µs
	I0717 22:28:57.361498  386657 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/16899-218877/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0717 22:28:57.361503  386657 cache.go:87] Successfully saved all images to host disk.
	I0717 22:28:57.386800  386657 fix.go:102] recreateIfNeeded on stopped-upgrade-173210: state=Stopped err=<nil>
	W0717 22:28:57.386836  386657 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 22:28:57.390035  386657 out.go:177] * Restarting existing docker container for "stopped-upgrade-173210" ...
	I0717 22:28:57.391652  386657 cli_runner.go:164] Run: docker start stopped-upgrade-173210
	I0717 22:28:57.708821  386657 cli_runner.go:164] Run: docker container inspect stopped-upgrade-173210 --format={{.State.Status}}
	I0717 22:28:57.727051  386657 kic.go:426] container "stopped-upgrade-173210" state is running.
	I0717 22:28:57.727463  386657 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-173210
	I0717 22:28:57.747748  386657 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/stopped-upgrade-173210/config.json ...
	I0717 22:28:57.748020  386657 machine.go:88] provisioning docker machine ...
	I0717 22:28:57.748061  386657 ubuntu.go:169] provisioning hostname "stopped-upgrade-173210"
	I0717 22:28:57.748126  386657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-173210
	I0717 22:28:57.772818  386657 main.go:141] libmachine: Using SSH client type: native
	I0717 22:28:57.773481  386657 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32949 <nil> <nil>}
	I0717 22:28:57.773506  386657 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-173210 && echo "stopped-upgrade-173210" | sudo tee /etc/hostname
	I0717 22:28:57.774164  386657 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51330->127.0.0.1:32949: read: connection reset by peer
	I0717 22:29:00.900320  386657 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-173210
	
	I0717 22:29:00.900397  386657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-173210
	I0717 22:29:00.919578  386657 main.go:141] libmachine: Using SSH client type: native
	I0717 22:29:00.920152  386657 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32949 <nil> <nil>}
	I0717 22:29:00.920183  386657 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-173210' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-173210/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-173210' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 22:29:01.036472  386657 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 22:29:01.036505  386657 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16899-218877/.minikube CaCertPath:/home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16899-218877/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16899-218877/.minikube}
	I0717 22:29:01.036548  386657 ubuntu.go:177] setting up certificates
	I0717 22:29:01.036564  386657 provision.go:83] configureAuth start
	I0717 22:29:01.036633  386657 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-173210
	I0717 22:29:01.057552  386657 provision.go:138] copyHostCerts
	I0717 22:29:01.057641  386657 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-218877/.minikube/key.pem, removing ...
	I0717 22:29:01.057660  386657 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-218877/.minikube/key.pem
	I0717 22:29:01.057755  386657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16899-218877/.minikube/key.pem (1679 bytes)
	I0717 22:29:01.057898  386657 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-218877/.minikube/ca.pem, removing ...
	I0717 22:29:01.057914  386657 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-218877/.minikube/ca.pem
	I0717 22:29:01.057959  386657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16899-218877/.minikube/ca.pem (1078 bytes)
	I0717 22:29:01.058055  386657 exec_runner.go:144] found /home/jenkins/minikube-integration/16899-218877/.minikube/cert.pem, removing ...
	I0717 22:29:01.058069  386657 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16899-218877/.minikube/cert.pem
	I0717 22:29:01.058109  386657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16899-218877/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16899-218877/.minikube/cert.pem (1123 bytes)
	I0717 22:29:01.058202  386657 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16899-218877/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-173210 san=[172.17.0.3 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-173210]
	I0717 22:29:01.254687  386657 provision.go:172] copyRemoteCerts
	I0717 22:29:01.254781  386657 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 22:29:01.254840  386657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-173210
	I0717 22:29:01.276483  386657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32949 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/stopped-upgrade-173210/id_rsa Username:docker}
	I0717 22:29:01.367951  386657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 22:29:01.386331  386657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0717 22:29:01.404227  386657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 22:29:01.423776  386657 provision.go:86] duration metric: configureAuth took 387.194892ms
	I0717 22:29:01.423809  386657 ubuntu.go:193] setting minikube options for container-runtime
	I0717 22:29:01.423987  386657 config.go:182] Loaded profile config "stopped-upgrade-173210": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0717 22:29:01.424087  386657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-173210
	I0717 22:29:01.443126  386657 main.go:141] libmachine: Using SSH client type: native
	I0717 22:29:01.443566  386657 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32949 <nil> <nil>}
	I0717 22:29:01.443585  386657 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 22:29:04.575662  386657 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 22:29:04.575696  386657 machine.go:91] provisioned docker machine in 6.827655454s
	I0717 22:29:04.575710  386657 start.go:300] post-start starting for "stopped-upgrade-173210" (driver="docker")
	I0717 22:29:04.575723  386657 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 22:29:04.575786  386657 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 22:29:04.575839  386657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-173210
	I0717 22:29:04.595782  386657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32949 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/stopped-upgrade-173210/id_rsa Username:docker}
	I0717 22:29:04.708277  386657 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 22:29:04.711827  386657 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 22:29:04.711849  386657 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 22:29:04.711857  386657 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 22:29:04.711863  386657 info.go:137] Remote host: Ubuntu 19.10
	I0717 22:29:04.711872  386657 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-218877/.minikube/addons for local assets ...
	I0717 22:29:04.711919  386657 filesync.go:126] Scanning /home/jenkins/minikube-integration/16899-218877/.minikube/files for local assets ...
	I0717 22:29:04.711983  386657 filesync.go:149] local asset: /home/jenkins/minikube-integration/16899-218877/.minikube/files/etc/ssl/certs/2256422.pem -> 2256422.pem in /etc/ssl/certs
	I0717 22:29:04.712063  386657 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 22:29:04.720040  386657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16899-218877/.minikube/files/etc/ssl/certs/2256422.pem --> /etc/ssl/certs/2256422.pem (1708 bytes)
	I0717 22:29:04.742250  386657 start.go:303] post-start completed in 166.522194ms
	I0717 22:29:04.742324  386657 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 22:29:04.742366  386657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-173210
	I0717 22:29:04.763072  386657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32949 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/stopped-upgrade-173210/id_rsa Username:docker}
	I0717 22:29:04.848152  386657 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 22:29:04.853075  386657 fix.go:56] fixHost completed within 7.492434801s
	I0717 22:29:04.853112  386657 start.go:83] releasing machines lock for "stopped-upgrade-173210", held for 7.492488153s
	I0717 22:29:04.853182  386657 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-173210
	I0717 22:29:04.880461  386657 ssh_runner.go:195] Run: cat /version.json
	I0717 22:29:04.880524  386657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-173210
	I0717 22:29:04.880864  386657 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 22:29:04.880936  386657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-173210
	I0717 22:29:04.907731  386657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32949 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/stopped-upgrade-173210/id_rsa Username:docker}
	I0717 22:29:04.916713  386657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32949 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/stopped-upgrade-173210/id_rsa Username:docker}
	W0717 22:29:04.991591  386657 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0717 22:29:04.991683  386657 ssh_runner.go:195] Run: systemctl --version
	I0717 22:29:05.049757  386657 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 22:29:05.100214  386657 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 22:29:05.105396  386657 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 22:29:05.128176  386657 cni.go:227] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0717 22:29:05.128265  386657 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 22:29:05.178376  386657 cni.go:268] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 22:29:05.178401  386657 start.go:466] detecting cgroup driver to use...
	I0717 22:29:05.178439  386657 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 22:29:05.178485  386657 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 22:29:05.214005  386657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 22:29:05.231893  386657 docker.go:196] disabling cri-docker service (if available) ...
	I0717 22:29:05.231968  386657 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 22:29:05.249012  386657 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 22:29:05.263734  386657 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0717 22:29:05.278189  386657 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0717 22:29:05.278246  386657 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 22:29:05.374142  386657 docker.go:212] disabling docker service ...
	I0717 22:29:05.374206  386657 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 22:29:05.388147  386657 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 22:29:05.400781  386657 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 22:29:05.501311  386657 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 22:29:05.597591  386657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 22:29:05.610710  386657 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 22:29:05.628064  386657 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0717 22:29:05.628162  386657 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 22:29:05.640747  386657 out.go:177] 
	W0717 22:29:05.642380  386657 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0717 22:29:05.642403  386657 out.go:239] * 
	W0717 22:29:05.643501  386657 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 22:29:05.645167  386657 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:212: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-173210 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (122.39s)
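
A likely root cause is readable from the stderr above: the v1.9.0-era guest image runs Ubuntu 19.10 and predates CRI-O's /etc/crio/crio.conf.d/ drop-in directory, so the unconditional sed against 02-crio.conf exits with status 2 and the start aborts with RUNTIME_ENABLE. Below is a minimal defensive sketch of the same pause_image update; the single-file fallback path /etc/crio/crio.conf is an assumption (it never appears in this log), and the sed in either branch is a no-op if the file has no pause_image line.

	# Sketch only: set CRI-O's pause_image wherever a config file actually exists.
	dropin=/etc/crio/crio.conf.d/02-crio.conf
	legacy=/etc/crio/crio.conf   # assumed pre-drop-in location, not confirmed by this log
	if [ -f "$dropin" ]; then
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$dropin"
	elif [ -f "$legacy" ]; then
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$legacy"
	else
	    # Last resort: create the drop-in with just the override.
	    sudo mkdir -p /etc/crio/crio.conf.d
	    printf '[crio.image]\npause_image = "registry.k8s.io/pause:3.2"\n' | sudo tee "$dropin" >/dev/null
	fi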

                                                
                                    

Test pass (274/304)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 24.75
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.06
10 TestDownloadOnly/v1.27.3/json-events 21.63
11 TestDownloadOnly/v1.27.3/preload-exists 0
15 TestDownloadOnly/v1.27.3/LogsDuration 0.06
16 TestDownloadOnly/DeleteAll 0.2
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.12
18 TestDownloadOnlyKic 1.21
19 TestBinaryMirror 0.75
20 TestOffline 52.41
22 TestAddons/Setup 133.09
24 TestAddons/parallel/Registry 14.62
26 TestAddons/parallel/InspektorGadget 10.95
27 TestAddons/parallel/MetricsServer 5.65
28 TestAddons/parallel/HelmTiller 18.97
30 TestAddons/parallel/CSI 49.97
31 TestAddons/parallel/Headlamp 14.12
32 TestAddons/parallel/CloudSpanner 5.93
35 TestAddons/serial/GCPAuth/Namespaces 0.12
36 TestAddons/StoppedEnableDisable 12.13
37 TestCertOptions 26.85
38 TestCertExpiration 246.87
40 TestForceSystemdFlag 32.97
41 TestForceSystemdEnv 29.78
43 TestKVMDriverInstallOrUpdate 7.42
47 TestErrorSpam/setup 21.16
48 TestErrorSpam/start 0.59
49 TestErrorSpam/status 0.87
50 TestErrorSpam/pause 1.45
51 TestErrorSpam/unpause 1.47
52 TestErrorSpam/stop 1.36
55 TestFunctional/serial/CopySyncFile 0
56 TestFunctional/serial/StartWithProxy 37.81
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 39.08
59 TestFunctional/serial/KubeContext 0.05
60 TestFunctional/serial/KubectlGetPods 0.08
63 TestFunctional/serial/CacheCmd/cache/add_remote 3.73
64 TestFunctional/serial/CacheCmd/cache/add_local 2.03
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
66 TestFunctional/serial/CacheCmd/cache/list 0.04
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.26
68 TestFunctional/serial/CacheCmd/cache/cache_reload 1.87
69 TestFunctional/serial/CacheCmd/cache/delete 0.09
70 TestFunctional/serial/MinikubeKubectlCmd 0.11
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
72 TestFunctional/serial/ExtraConfig 31.75
73 TestFunctional/serial/ComponentHealth 0.07
74 TestFunctional/serial/LogsCmd 1.36
75 TestFunctional/serial/LogsFileCmd 1.37
76 TestFunctional/serial/InvalidService 3.87
78 TestFunctional/parallel/ConfigCmd 0.33
79 TestFunctional/parallel/DashboardCmd 29.76
80 TestFunctional/parallel/DryRun 0.4
81 TestFunctional/parallel/InternationalLanguage 0.19
82 TestFunctional/parallel/StatusCmd 1.02
86 TestFunctional/parallel/ServiceCmdConnect 8.54
87 TestFunctional/parallel/AddonsCmd 0.15
88 TestFunctional/parallel/PersistentVolumeClaim 42.45
90 TestFunctional/parallel/SSHCmd 0.55
91 TestFunctional/parallel/CpCmd 1.15
92 TestFunctional/parallel/MySQL 21.6
93 TestFunctional/parallel/FileSync 0.31
94 TestFunctional/parallel/CertSync 1.59
98 TestFunctional/parallel/NodeLabels 0.07
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.58
103 TestFunctional/parallel/Version/short 0.06
104 TestFunctional/parallel/Version/components 1.1
105 TestFunctional/parallel/ImageCommands/ImageListShort 1.15
106 TestFunctional/parallel/ImageCommands/ImageListTable 0.2
107 TestFunctional/parallel/ImageCommands/ImageListJson 0.2
108 TestFunctional/parallel/ImageCommands/ImageListYaml 0.34
109 TestFunctional/parallel/ImageCommands/ImageBuild 5.34
110 TestFunctional/parallel/ImageCommands/Setup 2.04
111 TestFunctional/parallel/ServiceCmd/DeployApp 10.21
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.38
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 13.47
117 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.84
118 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.95
119 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.67
120 TestFunctional/parallel/ServiceCmd/List 0.55
121 TestFunctional/parallel/ServiceCmd/JSONOutput 0.46
122 TestFunctional/parallel/ServiceCmd/HTTPS 0.62
123 TestFunctional/parallel/ServiceCmd/Format 0.4
124 TestFunctional/parallel/ServiceCmd/URL 0.44
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
130 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
131 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.92
132 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
133 TestFunctional/parallel/ProfileCmd/profile_not_create 0.34
134 TestFunctional/parallel/ProfileCmd/profile_list 0.34
135 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.27
136 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
137 TestFunctional/parallel/MountCmd/any-port 7.62
138 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.3
139 TestFunctional/parallel/UpdateContextCmd/no_changes 0.13
140 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.13
141 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.12
142 TestFunctional/parallel/MountCmd/specific-port 2.01
143 TestFunctional/parallel/MountCmd/VerifyCleanup 1.84
144 TestFunctional/delete_addon-resizer_images 0.08
145 TestFunctional/delete_my-image_image 0.02
146 TestFunctional/delete_minikube_cached_images 0.01
150 TestIngressAddonLegacy/StartLegacyK8sCluster 87.48
152 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 14.79
153 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.54
157 TestJSONOutput/start/Command 37.91
158 TestJSONOutput/start/Audit 0
160 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
161 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
163 TestJSONOutput/pause/Command 0.65
164 TestJSONOutput/pause/Audit 0
166 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
169 TestJSONOutput/unpause/Command 0.6
170 TestJSONOutput/unpause/Audit 0
172 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
175 TestJSONOutput/stop/Command 5.75
176 TestJSONOutput/stop/Audit 0
178 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
180 TestErrorJSONOutput 0.19
182 TestKicCustomNetwork/create_custom_network 38.43
183 TestKicCustomNetwork/use_default_bridge_network 24.99
184 TestKicExistingNetwork 27
185 TestKicCustomSubnet 28.9
186 TestKicStaticIP 27.68
187 TestMainNoArgs 0.04
188 TestMinikubeProfile 47.09
191 TestMountStart/serial/StartWithMountFirst 8.2
192 TestMountStart/serial/VerifyMountFirst 0.24
193 TestMountStart/serial/StartWithMountSecond 5.3
194 TestMountStart/serial/VerifyMountSecond 0.24
195 TestMountStart/serial/DeleteFirst 1.64
196 TestMountStart/serial/VerifyMountPostDelete 0.24
197 TestMountStart/serial/Stop 1.19
198 TestMountStart/serial/RestartStopped 7.1
199 TestMountStart/serial/VerifyMountPostStop 0.25
202 TestMultiNode/serial/FreshStart2Nodes 53.87
203 TestMultiNode/serial/DeployApp2Nodes 4.78
205 TestMultiNode/serial/AddNode 18.46
206 TestMultiNode/serial/ProfileList 0.28
207 TestMultiNode/serial/CopyFile 8.74
208 TestMultiNode/serial/StopNode 2.11
209 TestMultiNode/serial/StartAfterStop 10.67
210 TestMultiNode/serial/RestartKeepsNodes 111.96
211 TestMultiNode/serial/DeleteNode 4.66
212 TestMultiNode/serial/StopMultiNode 23.84
213 TestMultiNode/serial/RestartMultiNode 77.71
214 TestMultiNode/serial/ValidateNameConflict 24.34
219 TestPreload 141.09
221 TestScheduledStopUnix 101.39
224 TestInsufficientStorage 12.72
227 TestKubernetesUpgrade 366.33
228 TestMissingContainerUpgrade 142.34
229 TestStoppedBinaryUpgrade/Setup 1.93
231 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
232 TestNoKubernetes/serial/StartWithK8s 32.13
234 TestNoKubernetes/serial/StartWithStopK8s 5.66
235 TestNoKubernetes/serial/Start 10.95
236 TestNoKubernetes/serial/VerifyK8sNotRunning 0.3
237 TestNoKubernetes/serial/ProfileList 1.59
238 TestNoKubernetes/serial/Stop 1.37
239 TestNoKubernetes/serial/StartNoArgs 8
240 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.34
241 TestStoppedBinaryUpgrade/MinikubeLogs 0.64
249 TestNetworkPlugins/group/false 3.05
261 TestPause/serial/Start 42.35
262 TestNetworkPlugins/group/auto/Start 41.2
263 TestPause/serial/SecondStartNoReconfiguration 39.99
264 TestNetworkPlugins/group/auto/KubeletFlags 0.25
265 TestNetworkPlugins/group/auto/NetCatPod 10.31
266 TestNetworkPlugins/group/auto/DNS 0.17
267 TestNetworkPlugins/group/auto/Localhost 0.16
268 TestNetworkPlugins/group/auto/HairPin 0.14
269 TestPause/serial/Pause 0.85
270 TestNetworkPlugins/group/kindnet/Start 41.82
271 TestPause/serial/VerifyStatus 0.41
272 TestPause/serial/Unpause 0.79
273 TestPause/serial/PauseAgain 0.8
274 TestPause/serial/DeletePaused 4.1
275 TestPause/serial/VerifyDeletedResources 0.54
276 TestNetworkPlugins/group/calico/Start 62.44
277 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
278 TestNetworkPlugins/group/kindnet/KubeletFlags 0.28
279 TestNetworkPlugins/group/kindnet/NetCatPod 10.43
280 TestNetworkPlugins/group/kindnet/DNS 0.17
281 TestNetworkPlugins/group/kindnet/Localhost 0.13
282 TestNetworkPlugins/group/kindnet/HairPin 0.13
283 TestNetworkPlugins/group/calico/ControllerPod 5.02
284 TestNetworkPlugins/group/calico/KubeletFlags 0.28
285 TestNetworkPlugins/group/calico/NetCatPod 12.25
286 TestNetworkPlugins/group/custom-flannel/Start 65.17
287 TestNetworkPlugins/group/calico/DNS 0.18
288 TestNetworkPlugins/group/calico/Localhost 0.15
289 TestNetworkPlugins/group/calico/HairPin 0.15
290 TestNetworkPlugins/group/enable-default-cni/Start 42.23
291 TestNetworkPlugins/group/flannel/Start 61.59
292 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.28
293 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.32
294 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.35
295 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.41
296 TestNetworkPlugins/group/bridge/Start 42.74
297 TestNetworkPlugins/group/custom-flannel/DNS 0.17
298 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
299 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
300 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
301 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
302 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
304 TestStartStop/group/old-k8s-version/serial/FirstStart 137.9
305 TestNetworkPlugins/group/flannel/ControllerPod 5.02
306 TestNetworkPlugins/group/flannel/KubeletFlags 0.31
307 TestNetworkPlugins/group/flannel/NetCatPod 10.39
309 TestStartStop/group/no-preload/serial/FirstStart 65.72
310 TestNetworkPlugins/group/flannel/DNS 0.19
311 TestNetworkPlugins/group/flannel/Localhost 0.21
312 TestNetworkPlugins/group/flannel/HairPin 0.17
313 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
314 TestNetworkPlugins/group/bridge/NetCatPod 9.37
315 TestNetworkPlugins/group/bridge/DNS 0.22
316 TestNetworkPlugins/group/bridge/Localhost 0.19
317 TestNetworkPlugins/group/bridge/HairPin 0.18
319 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 44.99
321 TestStartStop/group/newest-cni/serial/FirstStart 37.91
322 TestStartStop/group/no-preload/serial/DeployApp 10.42
323 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.97
324 TestStartStop/group/no-preload/serial/Stop 11.95
325 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.46
326 TestStartStop/group/newest-cni/serial/DeployApp 0
327 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.9
328 TestStartStop/group/newest-cni/serial/Stop 1.19
329 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.16
330 TestStartStop/group/newest-cni/serial/SecondStart 27.03
331 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.08
332 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.94
333 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
334 TestStartStop/group/no-preload/serial/SecondStart 334.84
335 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.17
336 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 335.03
337 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
338 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
339 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.33
340 TestStartStop/group/newest-cni/serial/Pause 2.81
342 TestStartStop/group/embed-certs/serial/FirstStart 41.1
343 TestStartStop/group/old-k8s-version/serial/DeployApp 10.43
344 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.8
345 TestStartStop/group/old-k8s-version/serial/Stop 12.03
346 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
347 TestStartStop/group/old-k8s-version/serial/SecondStart 432.33
348 TestStartStop/group/embed-certs/serial/DeployApp 10.39
349 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.95
350 TestStartStop/group/embed-certs/serial/Stop 12.1
351 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
352 TestStartStop/group/embed-certs/serial/SecondStart 586.59
353 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 11.02
354 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 12.02
355 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
356 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.32
357 TestStartStop/group/no-preload/serial/Pause 2.92
358 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
359 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.28
360 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.65
361 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
362 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
363 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.29
364 TestStartStop/group/old-k8s-version/serial/Pause 2.63
365 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.02
366 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
367 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.29
368 TestStartStop/group/embed-certs/serial/Pause 2.68
x
+
TestDownloadOnly/v1.16.0/json-events (24.75s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-151003 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-151003 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (24.752212148s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (24.75s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-151003
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-151003: exit status 85 (62.630868ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-151003 | jenkins | v1.31.0 | 17 Jul 23 21:57 UTC |          |
	|         | -p download-only-151003        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 21:57:34
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 21:57:34.974615  225653 out.go:296] Setting OutFile to fd 1 ...
	I0717 21:57:34.974807  225653 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:57:34.974816  225653 out.go:309] Setting ErrFile to fd 2...
	I0717 21:57:34.974820  225653 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:57:34.975010  225653 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-218877/.minikube/bin
	W0717 21:57:34.975122  225653 root.go:314] Error reading config file at /home/jenkins/minikube-integration/16899-218877/.minikube/config/config.json: open /home/jenkins/minikube-integration/16899-218877/.minikube/config/config.json: no such file or directory
	I0717 21:57:34.975709  225653 out.go:303] Setting JSON to true
	I0717 21:57:34.976731  225653 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5999,"bootTime":1689625056,"procs":385,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 21:57:34.976790  225653 start.go:138] virtualization: kvm guest
	I0717 21:57:34.979403  225653 out.go:97] [download-only-151003] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	I0717 21:57:34.981015  225653 out.go:169] MINIKUBE_LOCATION=16899
	W0717 21:57:34.979564  225653 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/16899-218877/.minikube/cache/preloaded-tarball: no such file or directory
	I0717 21:57:34.979627  225653 notify.go:220] Checking for updates...
	I0717 21:57:34.984064  225653 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 21:57:34.985474  225653 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16899-218877/kubeconfig
	I0717 21:57:34.986866  225653 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-218877/.minikube
	I0717 21:57:34.988347  225653 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0717 21:57:34.991221  225653 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 21:57:34.991535  225653 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 21:57:35.017340  225653 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 21:57:35.017437  225653 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 21:57:35.077634  225653 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:46 SystemTime:2023-07-17 21:57:35.068324511 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 21:57:35.077751  225653 docker.go:294] overlay module found
	I0717 21:57:35.079735  225653 out.go:97] Using the docker driver based on user configuration
	I0717 21:57:35.079765  225653 start.go:298] selected driver: docker
	I0717 21:57:35.079771  225653 start.go:880] validating driver "docker" against <nil>
	I0717 21:57:35.079859  225653 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 21:57:35.134507  225653 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:46 SystemTime:2023-07-17 21:57:35.125857345 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 21:57:35.134696  225653 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0717 21:57:35.135184  225653 start_flags.go:382] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0717 21:57:35.135354  225653 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 21:57:35.137408  225653 out.go:169] Using Docker driver with root privileges
	I0717 21:57:35.138913  225653 cni.go:84] Creating CNI manager for ""
	I0717 21:57:35.138937  225653 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 21:57:35.138950  225653 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 21:57:35.138959  225653 start_flags.go:319] config:
	{Name:download-only-151003 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-151003 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 21:57:35.140773  225653 out.go:97] Starting control plane node download-only-151003 in cluster download-only-151003
	I0717 21:57:35.140799  225653 cache.go:122] Beginning downloading kic base image for docker with crio
	I0717 21:57:35.142248  225653 out.go:97] Pulling base image ...
	I0717 21:57:35.142281  225653 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0717 21:57:35.142339  225653 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 21:57:35.159841  225653 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0717 21:57:35.160001  225653 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory
	I0717 21:57:35.160075  225653 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0717 21:57:35.457760  225653 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0717 21:57:35.457801  225653 cache.go:57] Caching tarball of preloaded images
	I0717 21:57:35.457956  225653 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0717 21:57:35.460163  225653 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0717 21:57:35.460195  225653 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0717 21:57:35.559942  225653 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/16899-218877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0717 21:57:47.628431  225653 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 as a tarball
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-151003"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.06s)
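
One detail worth flagging in the log above: the preload tarball URL carries a ?checksum=md5:... query (download.go:107), i.e. the digest is verified after download rather than trusted. A hand-run equivalent of that check, with the URL and digest copied verbatim from the log and an arbitrary local filename:

	url='https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4'
	curl -fLo preload.tar.lz4 "$url"
	# md5sum -c expects "<digest>  <file>" (two spaces)
	echo '432b600409d778ea7a21214e83948570  preload.tar.lz4' | md5sum -c -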

                                                
                                    
x
+
TestDownloadOnly/v1.27.3/json-events (21.63s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-151003 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-151003 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=crio --driver=docker  --container-runtime=crio: (21.631660896s)
--- PASS: TestDownloadOnly/v1.27.3/json-events (21.63s)

                                                
                                    
x
+
TestDownloadOnly/v1.27.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.3/preload-exists
--- PASS: TestDownloadOnly/v1.27.3/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.27.3/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.3/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-151003
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-151003: exit status 85 (62.560298ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-151003 | jenkins | v1.31.0 | 17 Jul 23 21:57 UTC |          |
	|         | -p download-only-151003        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-151003 | jenkins | v1.31.0 | 17 Jul 23 21:57 UTC |          |
	|         | -p download-only-151003        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.3   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 21:57:59
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 21:57:59.789023  225841 out.go:296] Setting OutFile to fd 1 ...
	I0717 21:57:59.789122  225841 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:57:59.789129  225841 out.go:309] Setting ErrFile to fd 2...
	I0717 21:57:59.789134  225841 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 21:57:59.789333  225841 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-218877/.minikube/bin
	W0717 21:57:59.789463  225841 root.go:314] Error reading config file at /home/jenkins/minikube-integration/16899-218877/.minikube/config/config.json: open /home/jenkins/minikube-integration/16899-218877/.minikube/config/config.json: no such file or directory
	I0717 21:57:59.789882  225841 out.go:303] Setting JSON to true
	I0717 21:57:59.790969  225841 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6024,"bootTime":1689625056,"procs":381,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 21:57:59.791033  225841 start.go:138] virtualization: kvm guest
	I0717 21:57:59.793819  225841 out.go:97] [download-only-151003] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	I0717 21:57:59.795571  225841 out.go:169] MINIKUBE_LOCATION=16899
	I0717 21:57:59.793990  225841 notify.go:220] Checking for updates...
	I0717 21:57:59.798911  225841 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 21:57:59.800861  225841 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16899-218877/kubeconfig
	I0717 21:57:59.802685  225841 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-218877/.minikube
	I0717 21:57:59.804321  225841 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0717 21:57:59.807190  225841 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 21:57:59.807617  225841 config.go:182] Loaded profile config "download-only-151003": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W0717 21:57:59.807742  225841 start.go:788] api.Load failed for download-only-151003: filestore "download-only-151003": Docker machine "download-only-151003" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0717 21:57:59.807884  225841 driver.go:373] Setting default libvirt URI to qemu:///system
	W0717 21:57:59.807928  225841 start.go:788] api.Load failed for download-only-151003: filestore "download-only-151003": Docker machine "download-only-151003" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0717 21:57:59.829627  225841 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 21:57:59.829743  225841 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 21:57:59.892959  225841 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:40 SystemTime:2023-07-17 21:57:59.883298681 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 21:57:59.893053  225841 docker.go:294] overlay module found
	I0717 21:57:59.895049  225841 out.go:97] Using the docker driver based on existing profile
	I0717 21:57:59.895069  225841 start.go:298] selected driver: docker
	I0717 21:57:59.895074  225841 start.go:880] validating driver "docker" against &{Name:download-only-151003 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-151003 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 21:57:59.895248  225841 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 21:57:59.954998  225841 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:40 SystemTime:2023-07-17 21:57:59.946070642 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 21:57:59.955644  225841 cni.go:84] Creating CNI manager for ""
	I0717 21:57:59.955661  225841 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 21:57:59.955671  225841 start_flags.go:319] config:
	{Name:download-only-151003 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:download-only-151003 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 21:57:59.957935  225841 out.go:97] Starting control plane node download-only-151003 in cluster download-only-151003
	I0717 21:57:59.957954  225841 cache.go:122] Beginning downloading kic base image for docker with crio
	I0717 21:57:59.959617  225841 out.go:97] Pulling base image ...
	I0717 21:57:59.959647  225841 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 21:57:59.959718  225841 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 21:57:59.977301  225841 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0717 21:57:59.977439  225841 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory
	I0717 21:57:59.977455  225841 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory, skipping pull
	I0717 21:57:59.977463  225841 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in cache, skipping pull
	I0717 21:57:59.977470  225841 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 as a tarball
	I0717 21:58:00.190934  225841 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.3/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0717 21:58:00.190975  225841 cache.go:57] Caching tarball of preloaded images
	I0717 21:58:00.191246  225841 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 21:58:00.193477  225841 out.go:97] Downloading Kubernetes v1.27.3 preload ...
	I0717 21:58:00.193506  225841 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 ...
	I0717 21:58:00.294178  225841 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.3/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:36a3ccedce25b36b9ffc5201ce124dec -> /home/jenkins/minikube-integration/16899-218877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0717 21:58:17.641151  225841 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 ...
	I0717 21:58:17.641250  225841 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16899-218877/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 ...
	I0717 21:58:18.567244  225841 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0717 21:58:18.567442  225841 profile.go:148] Saving config to /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/download-only-151003/config.json ...
	I0717 21:58:18.567657  225841 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 21:58:18.567875  225841 download.go:107] Downloading: https://dl.k8s.io/release/v1.27.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.27.3/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/16899-218877/.minikube/cache/linux/amd64/v1.27.3/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-151003"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.3/LogsDuration (0.06s)
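Note: the preload fetch recorded in the stdout above can be reproduced outside the test; a minimal sketch using the URL and md5 checksum taken verbatim from the log (the output filename is simply the basename of that URL):

	# Fetch the v1.27.3 cri-o preload tarball and verify it against the checksum embedded in the download URL.
	curl -LO https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.3/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	md5sum preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4   # expect 36a3ccedce25b36b9ffc5201ce124dec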

                                                
                                    
TestDownloadOnly/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-151003
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnlyKic (1.21s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-750030 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-750030" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-750030
--- PASS: TestDownloadOnlyKic (1.21s)

                                                
                                    
TestBinaryMirror (0.75s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-596153 --alsologtostderr --binary-mirror http://127.0.0.1:35365 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-596153" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-596153
--- PASS: TestBinaryMirror (0.75s)

                                                
                                    
TestOffline (52.41s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-116450 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-116450 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (49.918794446s)
helpers_test.go:175: Cleaning up "offline-crio-116450" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-116450
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-116450: (2.492871749s)
--- PASS: TestOffline (52.41s)

                                                
                                    
TestAddons/Setup (133.09s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p addons-759450 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p addons-759450 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m13.085017223s)
--- PASS: TestAddons/Setup (133.09s)
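Note: with a profile started as above, the per-profile addon states can be double-checked from the CLI; a small sketch (not part of the test itself):

	# List addon states for the profile created by TestAddons/Setup.
	out/minikube-linux-amd64 addons list -p addons-759450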

                                                
                                    
TestAddons/parallel/Registry (14.62s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 11.423903ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-k4mwx" [d04a15e8-945d-4017-9b42-4202fc1327d9] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.008355296s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-pchzf" [7055382d-1773-4ccc-bf7e-d773091690c4] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.009860415s
addons_test.go:316: (dbg) Run:  kubectl --context addons-759450 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-759450 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-759450 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.828423366s)
addons_test.go:335: (dbg) Run:  out/minikube-linux-amd64 -p addons-759450 ip
2023/07/17 22:00:50 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p addons-759450 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.62s)
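Note: the in-cluster reachability check above reduces to a one-off busybox pod; the command below is copied from the logged step and can be replayed by hand:

	# Probe the registry Service DNS name from inside the cluster.
	kubectl --context addons-759450 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"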

                                                
                                    
TestAddons/parallel/InspektorGadget (10.95s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-nv4x5" [ef17aee7-f87c-4586-8ddb-42cda5fc48f2] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.010654615s
addons_test.go:817: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-759450
addons_test.go:817: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-759450: (5.933558547s)
--- PASS: TestAddons/parallel/InspektorGadget (10.95s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.65s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 4.748853ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-844d8db974-gtlnf" [bc43e1d5-26a9-42fc-a9aa-ad13b9d7d6a7] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.009169525s
addons_test.go:391: (dbg) Run:  kubectl --context addons-759450 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p addons-759450 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.65s)

                                                
                                    
TestAddons/parallel/HelmTiller (18.97s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:432: tiller-deploy stabilized in 3.216892ms
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6847666dc-4vd8n" [1b041e2f-332b-4ca4-bfec-6945f90ce8a2] Running
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.008602063s
addons_test.go:449: (dbg) Run:  kubectl --context addons-759450 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:449: (dbg) Done: kubectl --context addons-759450 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (13.420328577s)
addons_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p addons-759450 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (18.97s)

                                                
                                    
TestAddons/parallel/CSI (49.97s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 12.13727ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-759450 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-759450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-759450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-759450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-759450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-759450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-759450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-759450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-759450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-759450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-759450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-759450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-759450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-759450 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-759450 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [3541122b-434b-44ce-972b-eddcdc1bc4f1] Pending
helpers_test.go:344: "task-pv-pod" [3541122b-434b-44ce-972b-eddcdc1bc4f1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [3541122b-434b-44ce-972b-eddcdc1bc4f1] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.007455988s
addons_test.go:560: (dbg) Run:  kubectl --context addons-759450 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-759450 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-759450 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-759450 delete pod task-pv-pod
addons_test.go:576: (dbg) Run:  kubectl --context addons-759450 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-759450 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-759450 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-759450 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-759450 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-759450 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [bb48f599-f581-41ec-b915-8e4a6b9c337e] Pending
helpers_test.go:344: "task-pv-pod-restore" [bb48f599-f581-41ec-b915-8e4a6b9c337e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [bb48f599-f581-41ec-b915-8e4a6b9c337e] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.006891342s
addons_test.go:602: (dbg) Run:  kubectl --context addons-759450 delete pod task-pv-pod-restore
addons_test.go:606: (dbg) Run:  kubectl --context addons-759450 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-759450 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-amd64 -p addons-759450 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-linux-amd64 -p addons-759450 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.592613228s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-amd64 -p addons-759450 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (49.97s)
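Note: condensed from the logged steps above, the CSI snapshot/restore round trip is the following kubectl sequence (the testdata paths are relative to the minikube source tree):

	# Provision a PVC and a pod, snapshot the volume, then restore into a new PVC and pod.
	kubectl --context addons-759450 create -f testdata/csi-hostpath-driver/pvc.yaml
	kubectl --context addons-759450 create -f testdata/csi-hostpath-driver/pv-pod.yaml
	kubectl --context addons-759450 create -f testdata/csi-hostpath-driver/snapshot.yaml
	kubectl --context addons-759450 delete pod task-pv-pod
	kubectl --context addons-759450 delete pvc hpvc
	kubectl --context addons-759450 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
	kubectl --context addons-759450 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml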

                                                
                                    
TestAddons/parallel/Headlamp (14.12s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-759450 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-759450 --alsologtostderr -v=1: (1.047954713s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-66f6498c69-tlv4k" [8e76bbf2-457f-429a-aa07-a5b34a5f4853] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-66f6498c69-tlv4k" [8e76bbf2-457f-429a-aa07-a5b34a5f4853] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.06756745s
--- PASS: TestAddons/parallel/Headlamp (14.12s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.93s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-88647b4cb-26c4c" [a3041b25-d9fd-4c0e-bb1e-33c8bcadcffc] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00688971s
addons_test.go:836: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-759450
--- PASS: TestAddons/parallel/CloudSpanner (5.93s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-759450 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-759450 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.13s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-759450
addons_test.go:148: (dbg) Done: out/minikube-linux-amd64 stop -p addons-759450: (11.896484283s)
addons_test.go:152: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-759450
addons_test.go:156: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-759450
addons_test.go:161: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-759450
--- PASS: TestAddons/StoppedEnableDisable (12.13s)

                                                
                                    
TestCertOptions (26.85s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-168976 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-168976 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (24.391303724s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-168976 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-168976 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-168976 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-168976" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-168976
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-168976: (1.887864125s)
--- PASS: TestCertOptions (26.85s)
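Note: the certificate assertions above hinge on the apiserver certificate's SANs and port; a sketch for inspecting them manually (the grep filter is an illustrative addition, not part of the test):

	# Print the Subject Alternative Names of the generated apiserver certificate.
	out/minikube-linux-amd64 -p cert-options-168976 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"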

                                                
                                    
TestCertExpiration (246.87s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-600072 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-600072 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (26.613278378s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-600072 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-600072 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (36.268368854s)
helpers_test.go:175: Cleaning up "cert-expiration-600072" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-600072
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-600072: (3.984892937s)
--- PASS: TestCertExpiration (246.87s)
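Note: assuming the same certificate path used by TestCertOptions above, the shortened lifetime can be spot-checked after the first start; a hypothetical sketch:

	# Show the notAfter date of the apiserver cert issued with --cert-expiration=3m.
	out/minikube-linux-amd64 -p cert-expiration-600072 ssh "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"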

                                                
                                    
TestForceSystemdFlag (32.97s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-405124 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-405124 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (28.949970094s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-405124 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-405124" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-405124
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-405124: (3.736814498s)
--- PASS: TestForceSystemdFlag (32.97s)

                                                
                                    
TestForceSystemdEnv (29.78s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-452980 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-452980 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (24.94093402s)
helpers_test.go:175: Cleaning up "force-systemd-env-452980" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-452980
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-452980: (4.834721086s)
--- PASS: TestForceSystemdEnv (29.78s)

                                                
                                    
TestKVMDriverInstallOrUpdate (7.42s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
E0717 22:29:21.622746  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/client.crt: no such file or directory
--- PASS: TestKVMDriverInstallOrUpdate (7.42s)

                                                
                                    
TestErrorSpam/setup (21.16s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-506216 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-506216 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-506216 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-506216 --driver=docker  --container-runtime=crio: (21.157346136s)
--- PASS: TestErrorSpam/setup (21.16s)

                                                
                                    
TestErrorSpam/start (0.59s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-506216 --log_dir /tmp/nospam-506216 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-506216 --log_dir /tmp/nospam-506216 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-506216 --log_dir /tmp/nospam-506216 start --dry-run
--- PASS: TestErrorSpam/start (0.59s)

                                                
                                    
TestErrorSpam/status (0.87s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-506216 --log_dir /tmp/nospam-506216 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-506216 --log_dir /tmp/nospam-506216 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-506216 --log_dir /tmp/nospam-506216 status
--- PASS: TestErrorSpam/status (0.87s)

                                                
                                    
TestErrorSpam/pause (1.45s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-506216 --log_dir /tmp/nospam-506216 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-506216 --log_dir /tmp/nospam-506216 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-506216 --log_dir /tmp/nospam-506216 pause
--- PASS: TestErrorSpam/pause (1.45s)

                                                
                                    
TestErrorSpam/unpause (1.47s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-506216 --log_dir /tmp/nospam-506216 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-506216 --log_dir /tmp/nospam-506216 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-506216 --log_dir /tmp/nospam-506216 unpause
--- PASS: TestErrorSpam/unpause (1.47s)

                                                
                                    
TestErrorSpam/stop (1.36s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-506216 --log_dir /tmp/nospam-506216 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-506216 --log_dir /tmp/nospam-506216 stop: (1.187346694s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-506216 --log_dir /tmp/nospam-506216 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-506216 --log_dir /tmp/nospam-506216 stop
--- PASS: TestErrorSpam/stop (1.36s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/16899-218877/.minikube/files/etc/test/nested/copy/225642/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (37.81s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-994983 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-994983 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (37.809034761s)
--- PASS: TestFunctional/serial/StartWithProxy (37.81s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (39.08s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-994983 --alsologtostderr -v=8
E0717 22:05:36.955649  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/client.crt: no such file or directory
E0717 22:05:36.961250  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/client.crt: no such file or directory
E0717 22:05:36.971543  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/client.crt: no such file or directory
E0717 22:05:36.992130  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/client.crt: no such file or directory
E0717 22:05:37.032466  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/client.crt: no such file or directory
E0717 22:05:37.112809  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/client.crt: no such file or directory
E0717 22:05:37.273229  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/client.crt: no such file or directory
E0717 22:05:37.594101  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/client.crt: no such file or directory
E0717 22:05:38.235035  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/client.crt: no such file or directory
E0717 22:05:39.515638  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/client.crt: no such file or directory
E0717 22:05:42.076765  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/client.crt: no such file or directory
E0717 22:05:47.197152  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-994983 --alsologtostderr -v=8: (39.080219199s)
functional_test.go:659: soft start took 39.080847947s for "functional-994983" cluster.
--- PASS: TestFunctional/serial/SoftStart (39.08s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-994983 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.73s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-994983 cache add registry.k8s.io/pause:3.1: (1.270317983s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 cache add registry.k8s.io/pause:3.3
E0717 22:05:57.437375  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/client.crt: no such file or directory
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-994983 cache add registry.k8s.io/pause:3.3: (1.262940894s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-994983 cache add registry.k8s.io/pause:latest: (1.19277026s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.73s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.03s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-994983 /tmp/TestFunctionalserialCacheCmdcacheadd_local2699949429/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 cache add minikube-local-cache-test:functional-994983
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-994983 cache add minikube-local-cache-test:functional-994983: (1.714568032s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 cache delete minikube-local-cache-test:functional-994983
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-994983
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.03s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.87s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-994983 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (256.36156ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-994983 cache reload: (1.065526799s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.87s)
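Note: the cache round trip above is, in plain commands copied from the logged steps: evict the image on the node, reload from minikube's local cache, then confirm it is back:

	out/minikube-linux-amd64 -p functional-994983 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-994983 cache reload
	out/minikube-linux-amd64 -p functional-994983 ssh sudo crictl inspecti registry.k8s.io/pause:latest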

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 kubectl -- --context functional-994983 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-994983 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (31.75s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-994983 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0717 22:06:17.917992  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-994983 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (31.745331229s)
functional_test.go:757: restart took 31.745457171s for "functional-994983" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (31.75s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-994983 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.36s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-994983 logs: (1.360157132s)
--- PASS: TestFunctional/serial/LogsCmd (1.36s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.37s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 logs --file /tmp/TestFunctionalserialLogsFileCmd260373221/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-994983 logs --file /tmp/TestFunctionalserialLogsFileCmd260373221/001/logs.txt: (1.367419013s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.37s)

                                                
                                    
TestFunctional/serial/InvalidService (3.87s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-994983 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-994983
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-994983: exit status 115 (316.938083ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30656 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-994983 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.87s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-994983 config get cpus: exit status 14 (54.383932ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-994983 config get cpus: exit status 14 (58.377697ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.33s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (29.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-994983 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-994983 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 261023: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (29.76s)

                                                
                                    
TestFunctional/parallel/DryRun (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-994983 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-994983 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (171.072788ms)

                                                
                                                
-- stdout --
	* [functional-994983] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16899
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16899-218877/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-218877/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 22:06:58.254506  258624 out.go:296] Setting OutFile to fd 1 ...
	I0717 22:06:58.254923  258624 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:06:58.254957  258624 out.go:309] Setting ErrFile to fd 2...
	I0717 22:06:58.254969  258624 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:06:58.255481  258624 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-218877/.minikube/bin
	I0717 22:06:58.256795  258624 out.go:303] Setting JSON to false
	I0717 22:06:58.257843  258624 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6562,"bootTime":1689625056,"procs":292,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 22:06:58.257924  258624 start.go:138] virtualization: kvm guest
	I0717 22:06:58.260890  258624 out.go:177] * [functional-994983] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	I0717 22:06:58.263376  258624 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 22:06:58.263405  258624 notify.go:220] Checking for updates...
	I0717 22:06:58.265006  258624 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 22:06:58.266623  258624 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16899-218877/kubeconfig
	I0717 22:06:58.268154  258624 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-218877/.minikube
	I0717 22:06:58.269538  258624 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 22:06:58.270990  258624 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 22:06:58.272900  258624 config.go:182] Loaded profile config "functional-994983": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:06:58.273472  258624 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 22:06:58.302527  258624 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 22:06:58.302622  258624 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 22:06:58.361361  258624 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:48 SystemTime:2023-07-17 22:06:58.352447907 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 22:06:58.361494  258624 docker.go:294] overlay module found
	I0717 22:06:58.363740  258624 out.go:177] * Using the docker driver based on existing profile
	I0717 22:06:58.365723  258624 start.go:298] selected driver: docker
	I0717 22:06:58.365756  258624 start.go:880] validating driver "docker" against &{Name:functional-994983 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-994983 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:06:58.366031  258624 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 22:06:58.369133  258624 out.go:177] 
	W0717 22:06:58.370634  258624 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0717 22:06:58.372084  258624 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-994983 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.40s)
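
Both outcomes above are intentional: the 250MB request trips minikube's memory validation during --dry-run (exit 23, RSRC_INSUFFICIENT_REQ_MEMORY) without touching the cluster, and the second run without --memory validates cleanly. A sketch of the failing check, reusing the exact flags from the log:

    out/minikube-linux-amd64 start -p functional-994983 --dry-run --memory 250MB \
        --alsologtostderr --driver=docker --container-runtime=crio
    echo $?   # 23: the requested 250MiB is below the usable minimum of 1800MB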

TestFunctional/parallel/InternationalLanguage (0.19s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-994983 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-994983 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (190.767675ms)

-- stdout --
	* [functional-994983] minikube v1.31.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16899
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16899-218877/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-218877/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0717 22:06:58.652913  258808 out.go:296] Setting OutFile to fd 1 ...
	I0717 22:06:58.655239  258808 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:06:58.655299  258808 out.go:309] Setting ErrFile to fd 2...
	I0717 22:06:58.655317  258808 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:06:58.655973  258808 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-218877/.minikube/bin
	I0717 22:06:58.657151  258808 out.go:303] Setting JSON to false
	I0717 22:06:58.658327  258808 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6563,"bootTime":1689625056,"procs":292,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 22:06:58.658433  258808 start.go:138] virtualization: kvm guest
	I0717 22:06:58.660886  258808 out.go:177] * [functional-994983] minikube v1.31.0 sur Ubuntu 20.04 (kvm/amd64)
	I0717 22:06:58.663001  258808 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 22:06:58.662955  258808 notify.go:220] Checking for updates...
	I0717 22:06:58.664633  258808 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 22:06:58.666348  258808 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16899-218877/kubeconfig
	I0717 22:06:58.667943  258808 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-218877/.minikube
	I0717 22:06:58.669534  258808 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 22:06:58.670964  258808 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 22:06:58.672894  258808 config.go:182] Loaded profile config "functional-994983": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:06:58.673524  258808 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 22:06:58.707375  258808 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 22:06:58.707518  258808 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 22:06:58.780652  258808 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:48 SystemTime:2023-07-17 22:06:58.768860191 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 22:06:58.781074  258808 docker.go:294] overlay module found
	I0717 22:06:58.783207  258808 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0717 22:06:58.784799  258808 start.go:298] selected driver: docker
	I0717 22:06:58.784819  258808 start.go:880] validating driver "docker" against &{Name:functional-994983 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-994983 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 22:06:58.784927  258808 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 22:06:58.787328  258808 out.go:177] 
	W0717 22:06:58.789075  258808 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0717 22:06:58.790851  258808 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)
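
The French output above is the point of this test: under a French locale minikube localizes its messages, e.g. "Utilisation du pilote docker basé sur le profil existant" ("Using the docker driver based on existing profile") and "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo" ("Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB"). A sketch of reproducing it, assuming minikube picks the language up from the standard locale variables:

    LC_ALL=fr out/minikube-linux-amd64 start -p functional-994983 --dry-run --memory 250MB \
        --driver=docker --container-runtime=crio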

TestFunctional/parallel/StatusCmd (1.02s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.02s)
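
The second invocation above exercises Go-template output ("kublet" is the test's own label spelling; the underlying status field is .Kubelet). A sketch of the three styles the test covers:

    out/minikube-linux-amd64 -p functional-994983 status            # human-readable
    out/minikube-linux-amd64 -p functional-994983 status -o json    # machine-readable
    out/minikube-linux-amd64 -p functional-994983 status \
        -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'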

TestFunctional/parallel/ServiceCmdConnect (8.54s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-994983 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-994983 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6fb669fc84-5hlkn" [8c366bf4-3f5a-4959-b3de-d89be401c7c9] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-6fb669fc84-5hlkn" [8c366bf4-3f5a-4959-b3de-d89be401c7c9] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.019781737s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:32219
functional_test.go:1674: http://192.168.49.2:32219: success! body:

Hostname: hello-node-connect-6fb669fc84-5hlkn

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32219
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.54s)
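
The echoserver body above is the proof of end-to-end NodePort connectivity. A sketch of the same round trip by hand, using the commands from the log plus a curl against the discovered URL (the NodePort varies per run):

    kubectl --context functional-994983 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-994983 expose deployment hello-node-connect --type=NodePort --port=8080
    URL=$(out/minikube-linux-amd64 -p functional-994983 service hello-node-connect --url)
    curl -s "$URL"   # echoes the pod hostname and request headers, as shown above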

TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (42.45s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [c47e417e-42b4-4f93-ae41-7b2423db381d] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.009311936s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-994983 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-994983 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-994983 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-994983 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ef6a77c7-61dd-45ec-acda-e6ff0bd70894] Pending
helpers_test.go:344: "sp-pod" [ef6a77c7-61dd-45ec-acda-e6ff0bd70894] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ef6a77c7-61dd-45ec-acda-e6ff0bd70894] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.011120285s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-994983 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-994983 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-994983 delete -f testdata/storage-provisioner/pod.yaml: (1.022596633s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-994983 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [15e4664f-ece4-459e-9aaf-7e593a52c0db] Pending
helpers_test.go:344: "sp-pod" [15e4664f-ece4-459e-9aaf-7e593a52c0db] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [15e4664f-ece4-459e-9aaf-7e593a52c0db] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 20.00904462s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-994983 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (42.45s)
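
The delete-and-recreate step is what proves persistence: /tmp/mount/foo, written before the first pod was deleted, is still listed by the second pod. A condensed sketch of the check (manifest paths are the repo's testdata, as in the log):

    kubectl --context functional-994983 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-994983 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-994983 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-994983 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-994983 apply -f testdata/storage-provisioner/pod.yaml    # fresh pod, same claim
    kubectl --context functional-994983 exec sp-pod -- ls /tmp/mount                      # foo survives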

TestFunctional/parallel/SSHCmd (0.55s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.55s)

TestFunctional/parallel/CpCmd (1.15s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 ssh -n functional-994983 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 cp functional-994983:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2456364735/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 ssh -n functional-994983 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.15s)
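
The cp test is a there-and-back round trip through the node's filesystem; a condensed sketch (the /tmp destination in the log is a per-run temp directory, so an illustrative path is used here):

    out/minikube-linux-amd64 -p functional-994983 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-amd64 -p functional-994983 ssh -n functional-994983 "sudo cat /home/docker/cp-test.txt"
    out/minikube-linux-amd64 -p functional-994983 cp functional-994983:/home/docker/cp-test.txt /tmp/cp-test.txt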

TestFunctional/parallel/MySQL (21.6s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-994983 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-7db894d786-bwt6k" [c628aaf0-20b4-41e1-9419-3477dabc5021] Pending
helpers_test.go:344: "mysql-7db894d786-bwt6k" [c628aaf0-20b4-41e1-9419-3477dabc5021] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-7db894d786-bwt6k" [c628aaf0-20b4-41e1-9419-3477dabc5021] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.011333106s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-994983 exec mysql-7db894d786-bwt6k -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-994983 exec mysql-7db894d786-bwt6k -- mysql -ppassword -e "show databases;": exit status 1 (185.952333ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-994983 exec mysql-7db894d786-bwt6k -- mysql -ppassword -e "show databases;"
2023/07/17 22:07:33 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MySQL (21.60s)
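
The first exec fails with ERROR 2002 because mysqld was still creating its unix socket even though the pod already reported Running; the test simply retries until the query succeeds. (The interleaved [DEBUG] GET line is the parallel DashboardCmd test polling its proxy on port 36195, not MySQL output.) A sketch of the same retry loop, with the pod name copied from the log and an illustrative interval:

    until kubectl --context functional-994983 exec mysql-7db894d786-bwt6k -- \
        mysql -ppassword -e "show databases;"; do
        sleep 2   # wait for /var/run/mysqld/mysqld.sock to appear
    done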

TestFunctional/parallel/FileSync (0.31s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/225642/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 ssh "sudo cat /etc/test/nested/copy/225642/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.31s)
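
File sync copies host-side files into the node so they exist at the same path inside the VM. A sketch of the mechanism under test, assuming the documented convention that anything under $MINIKUBE_HOME/files/ is mirrored into the node before startup:

    mkdir -p "$MINIKUBE_HOME/files/etc/test/nested/copy/225642"
    echo "Test file for checking file sync process" \
        > "$MINIKUBE_HOME/files/etc/test/nested/copy/225642/hosts"
    out/minikube-linux-amd64 -p functional-994983 ssh "sudo cat /etc/test/nested/copy/225642/hosts"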

TestFunctional/parallel/CertSync (1.59s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/225642.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 ssh "sudo cat /etc/ssl/certs/225642.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/225642.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 ssh "sudo cat /usr/share/ca-certificates/225642.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/2256422.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 ssh "sudo cat /etc/ssl/certs/2256422.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/2256422.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 ssh "sudo cat /usr/share/ca-certificates/2256422.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.59s)
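
The hashed filenames checked above (51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash naming, which lets TLS clients look certificates up by directory scan. A sketch for deriving the hash of a synced certificate locally (the input path is illustrative):

    openssl x509 -noout -hash -in /path/to/225642.pem   # prints e.g. 51391683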

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-994983 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-994983 ssh "sudo systemctl is-active docker": exit status 1 (297.670857ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-994983 ssh "sudo systemctl is-active containerd": exit status 1 (285.543785ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)
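
Both non-zero exits above are the assertion, not a failure: systemctl is-active prints "inactive" and exits 3 for a stopped unit, which is exactly what a cri-o node should report for docker and containerd. A sketch (the crio line is added for contrast and is assumed to report active on this profile):

    out/minikube-linux-amd64 -p functional-994983 ssh "sudo systemctl is-active crio"        # active, exit 0
    out/minikube-linux-amd64 -p functional-994983 ssh "sudo systemctl is-active docker"      # inactive, exit 3
    out/minikube-linux-amd64 -p functional-994983 ssh "sudo systemctl is-active containerd"  # inactive, exit 3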

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (1.1s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-amd64 -p functional-994983 version -o=json --components: (1.104360524s)
--- PASS: TestFunctional/parallel/Version/components (1.10s)

TestFunctional/parallel/ImageCommands/ImageListShort (1.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 image ls --format short --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-994983 image ls --format short --alsologtostderr: (1.153107148s)
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-994983 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.27.3
registry.k8s.io/kube-proxy:v1.27.3
registry.k8s.io/kube-controller-manager:v1.27.3
registry.k8s.io/kube-apiserver:v1.27.3
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-994983
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20230511-dc714da8
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-994983 image ls --format short --alsologtostderr:
I0717 22:07:12.525980  262593 out.go:296] Setting OutFile to fd 1 ...
I0717 22:07:12.526101  262593 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 22:07:12.526110  262593 out.go:309] Setting ErrFile to fd 2...
I0717 22:07:12.526114  262593 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 22:07:12.526299  262593 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-218877/.minikube/bin
I0717 22:07:12.526861  262593 config.go:182] Loaded profile config "functional-994983": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 22:07:12.526953  262593 config.go:182] Loaded profile config "functional-994983": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 22:07:12.527310  262593 cli_runner.go:164] Run: docker container inspect functional-994983 --format={{.State.Status}}
I0717 22:07:12.543121  262593 ssh_runner.go:195] Run: systemctl --version
I0717 22:07:12.543167  262593 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-994983
I0717 22:07:12.560006  262593 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/functional-994983/id_rsa Username:docker}
I0717 22:07:12.664496  262593 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (1.15s)
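
The four ImageList subtests render the same data (the stderr above shows it comes from `sudo crictl images --output json` on the node) in different shapes via --format. A sketch covering all of them:

    for f in short table json yaml; do
        out/minikube-linux-amd64 -p functional-994983 image ls --format "$f" --alsologtostderr
    done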

TestFunctional/parallel/ImageCommands/ImageListTable (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-994983 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/etcd                    | 3.5.7-0            | 86b6af7dd652c | 297MB  |
| registry.k8s.io/kube-scheduler          | v1.27.3            | 41697ceeb70b3 | 59.8MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/library/nginx                 | latest             | 021283c8eb95b | 191MB  |
| gcr.io/google-containers/addon-resizer  | functional-994983  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/kube-apiserver          | v1.27.3            | 08a0c939e61b7 | 122MB  |
| registry.k8s.io/kube-controller-manager | v1.27.3            | 7cffc01dba0e1 | 114MB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20230511-dc714da8 | b0b1fa0f58c6e | 65.2MB |
| docker.io/library/nginx                 | alpine             | 4937520ae206c | 43.2MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/my-image                      | functional-994983  | 5cc77655a9bcf | 1.47MB |
| registry.k8s.io/kube-proxy              | v1.27.3            | 5780543258cf0 | 72.7MB |
| docker.io/library/mysql                 | 5.7                | 2be84dd575ee2 | 588MB  |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-994983 image ls --format table --alsologtostderr:
I0717 22:07:19.557371  263468 out.go:296] Setting OutFile to fd 1 ...
I0717 22:07:19.557530  263468 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 22:07:19.557541  263468 out.go:309] Setting ErrFile to fd 2...
I0717 22:07:19.557548  263468 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 22:07:19.557756  263468 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-218877/.minikube/bin
I0717 22:07:19.558364  263468 config.go:182] Loaded profile config "functional-994983": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 22:07:19.558489  263468 config.go:182] Loaded profile config "functional-994983": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 22:07:19.558864  263468 cli_runner.go:164] Run: docker container inspect functional-994983 --format={{.State.Status}}
I0717 22:07:19.574758  263468 ssh_runner.go:195] Run: systemctl --version
I0717 22:07:19.574804  263468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-994983
I0717 22:07:19.590331  263468 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/functional-994983/id_rsa Username:docker}
I0717 22:07:19.675834  263468 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.20s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-994983 image ls --format json --alsologtostderr:
[{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681","repoDigests":["registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83","registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9"],"repoTags":["registry.k8s.io/etcd:3.5.7-0"],"size":"297083935"},{"id":"08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb","registry.k8s.io/kube-apiserver@sha256:fd03335dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0"],"repoTags":["registry.k8s.io/kube-apiserver:v1.27.3"],"size":"122065872"},{"id":"5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c","repoDigests":["registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f","registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699"],"repoTags":["registry.k8s.io/kube-proxy:v1.27.3"],"size":"72713623"},{"id":"b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da","repoDigests":["docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974","docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"],"repoTags":["docker.io/kindest/kindnetd:v20230511-dc714da8"],"size":"65249302"},{"id":"2be84dd575ee2ecdb186dc43a9cd951890a764d2cefbd31a72cdf4410c43a2d0","repoDigests":["docker.io/library/mysql@sha256:03b6dcedf5a2754da00e119e2cc6094ed3c884ad36b67bb25fe67be4b4f9bdb1","docker.io/library/mysql@sha256:bd873931ef20f30a5a9bf71498ce4e02c88cf48b2e8b782c337076d814deebde"],"repoTags":["docker.io/library/mysql:5.7"],"size":"588268197"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"5cc77655a9bcf49513f489747e693a8cdb2685ccab0cc89b2254ebc9ac9d0faa","repoDigests":["localhost/my-image@sha256:df87605d42b48cdb006a7840a09264243423ce3a5c74747fba2344dd2d8d25ec"],"repoTags":["localhost/my-image:functional-994983"],"size":"1468194"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e","registry.k8s.io/kube-controller-manager@sha256:d3bdc20876edfaa4894cf8464dc98592385a43cbc033b37846dccc2460c7bc06"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.27.3"],"size":"113919286"},{"id":"818e4313830fb4bc47b3169278adcc0647f727460e4902be434541dd9e61e32d","repoDigests":["docker.io/library/51315fed7f6131662e4cb12725ec0896b3999c49f550d4bbe6d9d1774500b6b5-tmp@sha256:0952f373851daa7d1dc4f492b23940cb623f5ae21ae0ad7c277b5ed1fcef8861"],"repoTags":[],"size":"1465611"},{"id":"021283c8eb95be02b23db0de7f609d603553c6714785e7a673c6594a624ffbda","repoDigests":["docker.io/library/nginx@sha256:08bc36ad52474e528cc1ea3426b5e3f4bad8a130318e3140d6cfe29c8892c7ef","docker.io/library/nginx@sha256:1bb5c4b86cb7c1e9f0209611dc2135d8a2c1c3a6436163970c99193787d067ea"],"repoTags":["docker.io/library/nginx:latest"],"size":"191044865"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-994983"],"size":"34114467"},{"id":"41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082","registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db3218a4b16ec19af174957e518df40c8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.27.3"],"size":"59811126"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"4937520ae206c8969734d9a659fc1e6594d9b22b9340bf0796defbea0c92dd02","repoDigests":["docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6","docker.io/library/nginx@sha256:2d4efe74ef541248b0a70838c557de04509d1115dec6bfc21ad0d66e41574a8a"],"repoTags":["docker.io/library/nginx:alpine"],"size":"43220780"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-994983 image ls --format json --alsologtostderr:
I0717 22:07:19.355862  263424 out.go:296] Setting OutFile to fd 1 ...
I0717 22:07:19.355988  263424 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 22:07:19.355998  263424 out.go:309] Setting ErrFile to fd 2...
I0717 22:07:19.356002  263424 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 22:07:19.356236  263424 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-218877/.minikube/bin
I0717 22:07:19.356832  263424 config.go:182] Loaded profile config "functional-994983": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 22:07:19.356948  263424 config.go:182] Loaded profile config "functional-994983": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 22:07:19.357348  263424 cli_runner.go:164] Run: docker container inspect functional-994983 --format={{.State.Status}}
I0717 22:07:19.373990  263424 ssh_runner.go:195] Run: systemctl --version
I0717 22:07:19.374038  263424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-994983
I0717 22:07:19.390281  263424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/functional-994983/id_rsa Username:docker}
I0717 22:07:19.479814  263424 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.20s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-994983 image ls --format yaml --alsologtostderr:
- id: 7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e
- registry.k8s.io/kube-controller-manager@sha256:d3bdc20876edfaa4894cf8464dc98592385a43cbc033b37846dccc2460c7bc06
repoTags:
- registry.k8s.io/kube-controller-manager:v1.27.3
size: "113919286"
- id: 4937520ae206c8969734d9a659fc1e6594d9b22b9340bf0796defbea0c92dd02
repoDigests:
- docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6
- docker.io/library/nginx@sha256:2d4efe74ef541248b0a70838c557de04509d1115dec6bfc21ad0d66e41574a8a
repoTags:
- docker.io/library/nginx:alpine
size: "43220780"
- id: 021283c8eb95be02b23db0de7f609d603553c6714785e7a673c6594a624ffbda
repoDigests:
- docker.io/library/nginx@sha256:08bc36ad52474e528cc1ea3426b5e3f4bad8a130318e3140d6cfe29c8892c7ef
- docker.io/library/nginx@sha256:1bb5c4b86cb7c1e9f0209611dc2135d8a2c1c3a6436163970c99193787d067ea
repoTags:
- docker.io/library/nginx:latest
size: "191044865"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-994983
size: "34114467"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb
- registry.k8s.io/kube-apiserver@sha256:fd03335dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0
repoTags:
- registry.k8s.io/kube-apiserver:v1.27.3
size: "122065872"
- id: 41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082
- registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db3218a4b16ec19af174957e518df40c8
repoTags:
- registry.k8s.io/kube-scheduler:v1.27.3
size: "59811126"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da
repoDigests:
- docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974
- docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9
repoTags:
- docker.io/kindest/kindnetd:v20230511-dc714da8
size: "65249302"
- id: 86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681
repoDigests:
- registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83
- registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9
repoTags:
- registry.k8s.io/etcd:3.5.7-0
size: "297083935"
- id: 5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c
repoDigests:
- registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f
- registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699
repoTags:
- registry.k8s.io/kube-proxy:v1.27.3
size: "72713623"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-994983 image ls --format yaml --alsologtostderr:
I0717 22:07:13.685679  262648 out.go:296] Setting OutFile to fd 1 ...
I0717 22:07:13.685816  262648 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 22:07:13.685822  262648 out.go:309] Setting ErrFile to fd 2...
I0717 22:07:13.685829  262648 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 22:07:13.686141  262648 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-218877/.minikube/bin
I0717 22:07:13.686925  262648 config.go:182] Loaded profile config "functional-994983": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 22:07:13.687113  262648 config.go:182] Loaded profile config "functional-994983": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 22:07:13.687749  262648 cli_runner.go:164] Run: docker container inspect functional-994983 --format={{.State.Status}}
I0717 22:07:13.707913  262648 ssh_runner.go:195] Run: systemctl --version
I0717 22:07:13.707993  262648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-994983
I0717 22:07:13.729480  262648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/functional-994983/id_rsa Username:docker}
I0717 22:07:13.863400  262648 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

TestFunctional/parallel/ImageCommands/ImageBuild (5.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-994983 ssh pgrep buildkitd: exit status 1 (275.301395ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 image build -t localhost/my-image:functional-994983 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-994983 image build -t localhost/my-image:functional-994983 testdata/build --alsologtostderr: (4.857554295s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-994983 image build -t localhost/my-image:functional-994983 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 818e4313830
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-994983
--> 5cc77655a9b
Successfully tagged localhost/my-image:functional-994983
5cc77655a9bcf49513f489747e693a8cdb2685ccab0cc89b2254ebc9ac9d0faa
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-994983 image build -t localhost/my-image:functional-994983 testdata/build --alsologtostderr:
I0717 22:07:14.293034  262830 out.go:296] Setting OutFile to fd 1 ...
I0717 22:07:14.293218  262830 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 22:07:14.293232  262830 out.go:309] Setting ErrFile to fd 2...
I0717 22:07:14.293239  262830 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 22:07:14.293506  262830 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-218877/.minikube/bin
I0717 22:07:14.294100  262830 config.go:182] Loaded profile config "functional-994983": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 22:07:14.294667  262830 config.go:182] Loaded profile config "functional-994983": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 22:07:14.295121  262830 cli_runner.go:164] Run: docker container inspect functional-994983 --format={{.State.Status}}
I0717 22:07:14.314476  262830 ssh_runner.go:195] Run: systemctl --version
I0717 22:07:14.314539  262830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-994983
I0717 22:07:14.331768  262830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/functional-994983/id_rsa Username:docker}
I0717 22:07:14.419597  262830 build_images.go:151] Building image from path: /tmp/build.3913370680.tar
I0717 22:07:14.419681  262830 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0717 22:07:14.428073  262830 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3913370680.tar
I0717 22:07:14.431299  262830 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3913370680.tar: stat -c "%s %y" /var/lib/minikube/build/build.3913370680.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3913370680.tar': No such file or directory
I0717 22:07:14.431336  262830 ssh_runner.go:362] scp /tmp/build.3913370680.tar --> /var/lib/minikube/build/build.3913370680.tar (3072 bytes)
I0717 22:07:14.466432  262830 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3913370680
I0717 22:07:14.475181  262830 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3913370680 -xf /var/lib/minikube/build/build.3913370680.tar
I0717 22:07:14.484663  262830 crio.go:297] Building image: /var/lib/minikube/build/build.3913370680
I0717 22:07:14.484726  262830 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-994983 /var/lib/minikube/build/build.3913370680 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0717 22:07:19.082502  262830 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-994983 /var/lib/minikube/build/build.3913370680 --cgroup-manager=cgroupfs: (4.597744767s)
I0717 22:07:19.082578  262830 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3913370680
I0717 22:07:19.091026  262830 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3913370680.tar
I0717 22:07:19.100574  262830 build_images.go:207] Built localhost/my-image:functional-994983 from /tmp/build.3913370680.tar
I0717 22:07:19.100604  262830 build_images.go:123] succeeded building to: functional-994983
I0717 22:07:19.100608  262830 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.34s)
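The three STEP lines in the stdout above imply a Dockerfile of roughly this shape; a minimal sketch for reproducing the build by hand (the actual contents of testdata/build are not shown in this report, and content.txt here is a stand-in):

	mkdir -p /tmp/build-demo && cd /tmp/build-demo
	printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
	echo demo > content.txt
	out/minikube-linux-amd64 -p functional-994983 image build -t localhost/my-image:functional-994983 . --alsologtostderr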

TestFunctional/parallel/ImageCommands/Setup (2.04s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.02088757s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-994983
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.04s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-994983 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-994983 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-775766b4cc-ngzx9" [6014942b-3683-4cee-bffe-9822eaddff89] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-775766b4cc-ngzx9" [6014942b-3683-4cee-bffe-9822eaddff89] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.021475596s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.21s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.38s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-994983 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-994983 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-994983 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-994983 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 256509: os: process already finished
helpers_test.go:502: unable to terminate pid 256353: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.38s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-994983 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (13.47s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-994983 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [6b184b45-e801-4270-ab58-14f95dd1bd13] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [6b184b45-e801-4270-ab58-14f95dd1bd13] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 13.065578622s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (13.47s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.84s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 image load --daemon gcr.io/google-containers/addon-resizer:functional-994983 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-994983 image load --daemon gcr.io/google-containers/addon-resizer:functional-994983 --alsologtostderr: (3.63216306s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.84s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.95s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 image load --daemon gcr.io/google-containers/addon-resizer:functional-994983 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-994983 image load --daemon gcr.io/google-containers/addon-resizer:functional-994983 --alsologtostderr: (2.747112171s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.95s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.67s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.95826494s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-994983
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 image load --daemon gcr.io/google-containers/addon-resizer:functional-994983 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-994983 image load --daemon gcr.io/google-containers/addon-resizer:functional-994983 --alsologtostderr: (4.466533659s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.67s)

TestFunctional/parallel/ServiceCmd/List (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.55s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 service list -o json
functional_test.go:1493: Took "462.290142ms" to run "out/minikube-linux-amd64 -p functional-994983 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.62s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:30031
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.62s)

TestFunctional/parallel/ServiceCmd/Format (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.40s)

TestFunctional/parallel/ServiceCmd/URL (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:30031
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.44s)
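Pieced together from the ServiceCmd subtests above, the full deploy-and-resolve flow is three commands; a minimal sketch, assuming the functional-994983 profile is running (the NodePort, 30031 in this run, is assigned by Kubernetes):

	kubectl --context functional-994983 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-994983 expose deployment hello-node --type=NodePort --port=8080
	out/minikube-linux-amd64 -p functional-994983 service hello-node --url   # e.g. http://192.168.49.2:30031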

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-994983 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.160.228 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
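The tunnel flow exercised by this serial group can be reproduced by hand; a minimal sketch (the LoadBalancer IP, 10.109.160.228 in this run, is assigned dynamically once the tunnel is up):

	out/minikube-linux-amd64 -p functional-994983 tunnel --alsologtostderr &   # keep the tunnel running in the background
	SVC_IP=$(kubectl --context functional-994983 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
	curl -s "http://${SVC_IP}/" | head -n 4   # nginx welcome page once the route exists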

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-994983 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 image save gcr.io/google-containers/addon-resizer:functional-994983 /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.92s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 image rm gcr.io/google-containers/addon-resizer:functional-994983 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.34s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0717 22:06:58.878743  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.34s)

TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "292.018967ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "45.642369ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-994983 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.069799944s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.27s)
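ImageSaveToFile and ImageLoadFromFile together make a tar round-trip; a minimal sketch using the same image reference as above (the tar path is arbitrary):

	out/minikube-linux-amd64 -p functional-994983 image save \
	  gcr.io/google-containers/addon-resizer:functional-994983 ./addon-resizer-save.tar --alsologtostderr
	out/minikube-linux-amd64 -p functional-994983 image load ./addon-resizer-save.tar --alsologtostderr
	out/minikube-linux-amd64 -p functional-994983 image ls | grep addon-resizer   # confirm the image is back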

TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "327.349352ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "44.080491ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)
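The JSON from `profile list -o json` groups profiles into valid and invalid arrays; a minimal sketch of extracting the names (assumes jq on the host; the .valid[].Name path reflects minikube's profile-list schema as I understand it):

	out/minikube-linux-amd64 profile list -o json | jq -r '.valid[].Name'   # e.g. functional-994983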

TestFunctional/parallel/MountCmd/any-port (7.62s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-994983 /tmp/TestFunctionalparallelMountCmdany-port333450510/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1689631619850578754" to /tmp/TestFunctionalparallelMountCmdany-port333450510/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1689631619850578754" to /tmp/TestFunctionalparallelMountCmdany-port333450510/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1689631619850578754" to /tmp/TestFunctionalparallelMountCmdany-port333450510/001/test-1689631619850578754
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-994983 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (283.954892ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 17 22:06 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 17 22:06 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 17 22:06 test-1689631619850578754
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 ssh cat /mount-9p/test-1689631619850578754
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-994983 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [de521a86-c11f-4762-a361-497600d2a72b] Pending
helpers_test.go:344: "busybox-mount" [de521a86-c11f-4762-a361-497600d2a72b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [de521a86-c11f-4762-a361-497600d2a72b] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [de521a86-c11f-4762-a361-497600d2a72b] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.006909795s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-994983 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-994983 /tmp/TestFunctionalparallelMountCmdany-port333450510/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.62s)
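The 9p mount checked above can be driven by hand; a minimal sketch (the host directory is arbitrary, and the short sleep mirrors the retried findmnt above, since the mount comes up asynchronously):

	mkdir -p /tmp/demo-mount
	out/minikube-linux-amd64 mount -p functional-994983 /tmp/demo-mount:/mount-9p --alsologtostderr -v=1 &
	sleep 2                                                                   # give the 9p server a moment
	out/minikube-linux-amd64 -p functional-994983 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-994983 ssh -- ls -la /mount-9p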

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-994983
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 image save --daemon gcr.io/google-containers/addon-resizer:functional-994983 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-994983 image save --daemon gcr.io/google-containers/addon-resizer:functional-994983 --alsologtostderr: (2.263723817s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-994983
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.30s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

TestFunctional/parallel/MountCmd/specific-port (2.01s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-994983 /tmp/TestFunctionalparallelMountCmdspecific-port1689849970/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-994983 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (249.076611ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-994983 /tmp/TestFunctionalparallelMountCmdspecific-port1689849970/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-994983 ssh "sudo umount -f /mount-9p": exit status 1 (393.766003ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-994983 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-994983 /tmp/TestFunctionalparallelMountCmdspecific-port1689849970/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.01s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.84s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-994983 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2472920255/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-994983 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2472920255/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-994983 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2472920255/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-994983 ssh "findmnt -T" /mount1: exit status 1 (482.999031ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-994983 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-994983 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-994983 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2472920255/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-994983 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2472920255/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-994983 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2472920255/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.84s)

TestFunctional/delete_addon-resizer_images (0.08s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-994983
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-994983
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-994983
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestIngressAddonLegacy/StartLegacyK8sCluster (87.48s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-988346 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0717 22:08:20.799619  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-988346 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m27.481887642s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (87.48s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.79s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-988346 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-988346 addons enable ingress --alsologtostderr -v=5: (14.792817912s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.79s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.54s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-988346 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.54s)

TestJSONOutput/start/Command (37.91s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-119884 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-119884 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (37.90551353s)
--- PASS: TestJSONOutput/start/Command (37.91s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.65s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-119884 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.65s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.6s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-119884 --output=json --user=testUser
E0717 22:13:04.661467  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/functional-994983/client.crt: no such file or directory
--- PASS: TestJSONOutput/unpause/Command (0.60s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.75s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-119884 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-119884 --output=json --user=testUser: (5.753450976s)
--- PASS: TestJSONOutput/stop/Command (5.75s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.19s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-251201 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-251201 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (64.460275ms)
-- stdout --
	{"specversion":"1.0","id":"93541260-fec6-4daa-b484-d7e6290b11f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-251201] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"298b04e1-0d86-4017-8c48-43f769852859","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16899"}}
	{"specversion":"1.0","id":"a209cfb8-16d6-4786-a6a5-fedefc34ddaf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3008c450-6e1c-4b67-9dff-3926ed71c4f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/16899-218877/kubeconfig"}}
	{"specversion":"1.0","id":"1fe88616-1c44-418f-b9b5-208aaf7dae30","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-218877/.minikube"}}
	{"specversion":"1.0","id":"b11d1407-1b20-4550-a40b-dd7a925956b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"d6e40985-5a00-4b7d-88f2-1023af7b335a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"19c31468-7bcd-44ca-bb16-a22d28783661","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-251201" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-251201
--- PASS: TestErrorJSONOutput (0.19s)
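
Note: every stdout line above is a self-contained CloudEvents JSON object, so the error event can be extracted mechanically. A minimal sketch, with an illustrative profile name (jq is not part of the test):

	out/minikube-linux-amd64 start -p json-demo --memory=2200 --output=json --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
	# expected, per the log above: DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/amd64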

                                                
                                    
TestKicCustomNetwork/create_custom_network (38.43s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-786909 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-786909 --network=: (36.418592495s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-786909" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-786909
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-786909: (1.995039523s)
--- PASS: TestKicCustomNetwork/create_custom_network (38.43s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (24.99s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-308468 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-308468 --network=bridge: (23.050268506s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-308468" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-308468
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-308468: (1.921930379s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.99s)

                                                
                                    
TestKicExistingNetwork (27s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-588570 --network=existing-network
E0717 22:14:21.623571  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/client.crt: no such file or directory
E0717 22:14:21.628851  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/client.crt: no such file or directory
E0717 22:14:21.639155  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/client.crt: no such file or directory
E0717 22:14:21.659534  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/client.crt: no such file or directory
E0717 22:14:21.699886  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/client.crt: no such file or directory
E0717 22:14:21.780294  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/client.crt: no such file or directory
E0717 22:14:21.940743  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/client.crt: no such file or directory
E0717 22:14:22.261337  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/client.crt: no such file or directory
E0717 22:14:22.902063  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/client.crt: no such file or directory
E0717 22:14:24.182772  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/client.crt: no such file or directory
E0717 22:14:26.582492  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/functional-994983/client.crt: no such file or directory
E0717 22:14:26.743883  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/client.crt: no such file or directory
E0717 22:14:31.864920  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/client.crt: no such file or directory
E0717 22:14:42.105127  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-588570 --network=existing-network: (25.000247302s)
helpers_test.go:175: Cleaning up "existing-network-588570" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-588570
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-588570: (1.86650327s)
--- PASS: TestKicExistingNetwork (27.00s)
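
Note: this test relies on the Docker network existing before minikube starts. The flow can be replayed by hand; a minimal sketch (profile name illustrative):

	docker network create existing-network                  # pre-create the network
	out/minikube-linux-amd64 start -p net-demo --network=existing-network
	docker network ls --format {{.Name}}                    # existing-network is reused, not recreated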

                                                
                                    
TestKicCustomSubnet (28.9s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-059147 --subnet=192.168.60.0/24
E0717 22:15:02.585680  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-059147 --subnet=192.168.60.0/24: (26.85972325s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-059147 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-059147" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-059147
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-059147: (2.022129418s)
--- PASS: TestKicCustomSubnet (28.90s)
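
Note: the assertion here reduces to one docker inspect of the profile's network (for the kic driver the network carries the profile name). A minimal sketch with an illustrative profile name:

	out/minikube-linux-amd64 start -p subnet-demo --subnet=192.168.60.0/24
	docker network inspect subnet-demo --format "{{(index .IPAM.Config 0).Subnet}}"   # prints 192.168.60.0/24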

                                                
                                    
TestKicStaticIP (27.68s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-093695 --static-ip=192.168.200.200
E0717 22:15:36.955601  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-093695 --static-ip=192.168.200.200: (25.567107759s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-093695 ip
helpers_test.go:175: Cleaning up "static-ip-093695" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-093695
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-093695: (1.988022519s)
--- PASS: TestKicStaticIP (27.68s)
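
Note: the static IP claim is verified by comparing the start flag against the ip subcommand, exactly as run above. A minimal sketch (profile name illustrative):

	out/minikube-linux-amd64 start -p ip-demo --static-ip=192.168.200.200
	out/minikube-linux-amd64 -p ip-demo ip   # expected output: 192.168.200.200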

                                                
                                    
TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (47.09s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-799889 --driver=docker  --container-runtime=crio
E0717 22:15:43.546766  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-799889 --driver=docker  --container-runtime=crio: (21.90141994s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-803156 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-803156 --driver=docker  --container-runtime=crio: (20.154000017s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-799889
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-803156
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-803156" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-803156
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-803156: (1.816554108s)
helpers_test.go:175: Cleaning up "first-799889" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-799889
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-799889: (2.229274578s)
--- PASS: TestMinikubeProfile (47.09s)
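
Note: the profile commands above switch the active profile and then dump all profiles as JSON. A minimal sketch, assuming the usual valid/invalid split in the profile list output (jq is not part of the test):

	out/minikube-linux-amd64 profile first-799889                          # make first-799889 the active profile
	out/minikube-linux-amd64 profile list -ojson | jq -r '.valid[].Name'   # both profiles should appear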

                                                
                                    
TestMountStart/serial/StartWithMountFirst (8.2s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-489605 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-489605 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.200216648s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.20s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.24s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-489605 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.24s)
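
Note: the pattern in the two tests above is start-with-mount, then list the mount over ssh. A minimal sketch mirroring the flags of this run (profile name illustrative; /minikube-host is the mount point these tests use):

	out/minikube-linux-amd64 start -p mount-demo --memory=2048 --mount --mount-port 46464 --no-kubernetes --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 -p mount-demo ssh -- ls /minikube-host   # host files should be visible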

                                                
                                    
TestMountStart/serial/StartWithMountSecond (5.3s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-507702 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-507702 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.301565097s)
E0717 22:16:42.739961  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/functional-994983/client.crt: no such file or directory
--- PASS: TestMountStart/serial/StartWithMountSecond (5.30s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.24s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-507702 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.24s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.64s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-489605 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-489605 --alsologtostderr -v=5: (1.63938289s)
--- PASS: TestMountStart/serial/DeleteFirst (1.64s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-507702 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.24s)

                                                
                                    
TestMountStart/serial/Stop (1.19s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-507702
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-507702: (1.194281269s)
--- PASS: TestMountStart/serial/Stop (1.19s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.1s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-507702
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-507702: (6.10070333s)
--- PASS: TestMountStart/serial/RestartStopped (7.10s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-507702 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (53.87s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-265316 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0717 22:17:05.467993  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/client.crt: no such file or directory
E0717 22:17:10.422779  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/functional-994983/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-265316 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (53.415243581s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-265316 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (53.87s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.78s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-265316 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-265316 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-265316 -- rollout status deployment/busybox: (3.130628127s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-265316 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-265316 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-265316 -- exec busybox-67b7f59bb-chlgz -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-265316 -- exec busybox-67b7f59bb-dhkzz -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-265316 -- exec busybox-67b7f59bb-chlgz -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-265316 -- exec busybox-67b7f59bb-dhkzz -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-265316 -- exec busybox-67b7f59bb-chlgz -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-265316 -- exec busybox-67b7f59bb-dhkzz -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.78s)
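
Note: the six exec calls above are one DNS probe per pod per name; the same check loops naturally over the deployment's pods. A minimal sketch, assuming the testdata deployment labels its pods app=busybox (context name illustrative):

	for pod in $(kubectl --context multinode-demo get pods -l app=busybox -o name); do
	  kubectl --context multinode-demo exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
	done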

                                                
                                    
TestMultiNode/serial/AddNode (18.46s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-265316 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-265316 -v 3 --alsologtostderr: (17.855243616s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-265316 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.46s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.28s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.28s)

                                                
                                    
TestMultiNode/serial/CopyFile (8.74s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-265316 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-265316 cp testdata/cp-test.txt multinode-265316:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-265316 ssh -n multinode-265316 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-265316 cp multinode-265316:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3910316565/001/cp-test_multinode-265316.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-265316 ssh -n multinode-265316 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-265316 cp multinode-265316:/home/docker/cp-test.txt multinode-265316-m02:/home/docker/cp-test_multinode-265316_multinode-265316-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-265316 ssh -n multinode-265316 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-265316 ssh -n multinode-265316-m02 "sudo cat /home/docker/cp-test_multinode-265316_multinode-265316-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-265316 cp multinode-265316:/home/docker/cp-test.txt multinode-265316-m03:/home/docker/cp-test_multinode-265316_multinode-265316-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-265316 ssh -n multinode-265316 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-265316 ssh -n multinode-265316-m03 "sudo cat /home/docker/cp-test_multinode-265316_multinode-265316-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-265316 cp testdata/cp-test.txt multinode-265316-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-265316 ssh -n multinode-265316-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-265316 cp multinode-265316-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3910316565/001/cp-test_multinode-265316-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-265316 ssh -n multinode-265316-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-265316 cp multinode-265316-m02:/home/docker/cp-test.txt multinode-265316:/home/docker/cp-test_multinode-265316-m02_multinode-265316.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-265316 ssh -n multinode-265316-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-265316 ssh -n multinode-265316 "sudo cat /home/docker/cp-test_multinode-265316-m02_multinode-265316.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-265316 cp multinode-265316-m02:/home/docker/cp-test.txt multinode-265316-m03:/home/docker/cp-test_multinode-265316-m02_multinode-265316-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-265316 ssh -n multinode-265316-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-265316 ssh -n multinode-265316-m03 "sudo cat /home/docker/cp-test_multinode-265316-m02_multinode-265316-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-265316 cp testdata/cp-test.txt multinode-265316-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-265316 ssh -n multinode-265316-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-265316 cp multinode-265316-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3910316565/001/cp-test_multinode-265316-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-265316 ssh -n multinode-265316-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-265316 cp multinode-265316-m03:/home/docker/cp-test.txt multinode-265316:/home/docker/cp-test_multinode-265316-m03_multinode-265316.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-265316 ssh -n multinode-265316-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-265316 ssh -n multinode-265316 "sudo cat /home/docker/cp-test_multinode-265316-m03_multinode-265316.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-265316 cp multinode-265316-m03:/home/docker/cp-test.txt multinode-265316-m02:/home/docker/cp-test_multinode-265316-m03_multinode-265316-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-265316 ssh -n multinode-265316-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-265316 ssh -n multinode-265316-m02 "sudo cat /home/docker/cp-test_multinode-265316-m03_multinode-265316-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.74s)
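
Note: every hop above is the same two-step pattern: cp onto a node, then read the file back with ssh -n. A minimal sketch using the names from this run:

	out/minikube-linux-amd64 -p multinode-265316 cp testdata/cp-test.txt multinode-265316-m02:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p multinode-265316 ssh -n multinode-265316-m02 "sudo cat /home/docker/cp-test.txt"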

                                                
                                    
TestMultiNode/serial/StopNode (2.11s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-265316 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-265316 node stop m03: (1.19151984s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-265316 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-265316 status: exit status 7 (462.324806ms)

                                                
                                                
-- stdout --
	multinode-265316
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-265316-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-265316-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-265316 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-265316 status --alsologtostderr: exit status 7 (454.716829ms)

                                                
                                                
-- stdout --
	multinode-265316
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-265316-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-265316-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 22:18:26.923133  323050 out.go:296] Setting OutFile to fd 1 ...
	I0717 22:18:26.923258  323050 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:18:26.923270  323050 out.go:309] Setting ErrFile to fd 2...
	I0717 22:18:26.923277  323050 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:18:26.923519  323050 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-218877/.minikube/bin
	I0717 22:18:26.923687  323050 out.go:303] Setting JSON to false
	I0717 22:18:26.923718  323050 mustload.go:65] Loading cluster: multinode-265316
	I0717 22:18:26.923750  323050 notify.go:220] Checking for updates...
	I0717 22:18:26.924068  323050 config.go:182] Loaded profile config "multinode-265316": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:18:26.924085  323050 status.go:255] checking status of multinode-265316 ...
	I0717 22:18:26.924471  323050 cli_runner.go:164] Run: docker container inspect multinode-265316 --format={{.State.Status}}
	I0717 22:18:26.941704  323050 status.go:330] multinode-265316 host status = "Running" (err=<nil>)
	I0717 22:18:26.941761  323050 host.go:66] Checking if "multinode-265316" exists ...
	I0717 22:18:26.942077  323050 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-265316
	I0717 22:18:26.960319  323050 host.go:66] Checking if "multinode-265316" exists ...
	I0717 22:18:26.960675  323050 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 22:18:26.960737  323050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-265316
	I0717 22:18:26.980002  323050 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/multinode-265316/id_rsa Username:docker}
	I0717 22:18:27.068740  323050 ssh_runner.go:195] Run: systemctl --version
	I0717 22:18:27.072568  323050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:18:27.082530  323050 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 22:18:27.141546  323050 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:57 SystemTime:2023-07-17 22:18:27.132797095 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 22:18:27.142088  323050 kubeconfig.go:92] found "multinode-265316" server: "https://192.168.58.2:8443"
	I0717 22:18:27.142108  323050 api_server.go:166] Checking apiserver status ...
	I0717 22:18:27.142149  323050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 22:18:27.152273  323050 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1391/cgroup
	I0717 22:18:27.160682  323050 api_server.go:182] apiserver freezer: "11:freezer:/docker/529ff452b9ecf25960278abb78646d8b0e8f53a086960470b4fafcfa793123af/crio/crio-3cfffa8e02b88add1925b5ac270d48daef7bfd03d826f21c24adb900f3b16d31"
	I0717 22:18:27.160732  323050 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/529ff452b9ecf25960278abb78646d8b0e8f53a086960470b4fafcfa793123af/crio/crio-3cfffa8e02b88add1925b5ac270d48daef7bfd03d826f21c24adb900f3b16d31/freezer.state
	I0717 22:18:27.167908  323050 api_server.go:204] freezer state: "THAWED"
	I0717 22:18:27.167939  323050 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0717 22:18:27.173274  323050 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0717 22:18:27.173298  323050 status.go:421] multinode-265316 apiserver status = Running (err=<nil>)
	I0717 22:18:27.173308  323050 status.go:257] multinode-265316 status: &{Name:multinode-265316 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 22:18:27.173324  323050 status.go:255] checking status of multinode-265316-m02 ...
	I0717 22:18:27.173537  323050 cli_runner.go:164] Run: docker container inspect multinode-265316-m02 --format={{.State.Status}}
	I0717 22:18:27.190088  323050 status.go:330] multinode-265316-m02 host status = "Running" (err=<nil>)
	I0717 22:18:27.190116  323050 host.go:66] Checking if "multinode-265316-m02" exists ...
	I0717 22:18:27.190349  323050 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-265316-m02
	I0717 22:18:27.208937  323050 host.go:66] Checking if "multinode-265316-m02" exists ...
	I0717 22:18:27.209477  323050 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 22:18:27.209536  323050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-265316-m02
	I0717 22:18:27.224905  323050 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16899-218877/.minikube/machines/multinode-265316-m02/id_rsa Username:docker}
	I0717 22:18:27.311931  323050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 22:18:27.321788  323050 status.go:257] multinode-265316-m02 status: &{Name:multinode-265316-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0717 22:18:27.321826  323050 status.go:255] checking status of multinode-265316-m03 ...
	I0717 22:18:27.322063  323050 cli_runner.go:164] Run: docker container inspect multinode-265316-m03 --format={{.State.Status}}
	I0717 22:18:27.337621  323050 status.go:330] multinode-265316-m03 host status = "Stopped" (err=<nil>)
	I0717 22:18:27.337644  323050 status.go:343] host is not running, skipping remaining checks
	I0717 22:18:27.337650  323050 status.go:257] multinode-265316-m03 status: &{Name:multinode-265316-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.11s)
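
Note: exit status 7 here is the expected signal that at least one node is down, not a failure of the status command itself. A minimal sketch using the names from this run:

	out/minikube-linux-amd64 -p multinode-265316 node stop m03
	out/minikube-linux-amd64 -p multinode-265316 status
	echo $?   # 7 while any node is stopped; 0 once every node is running again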

                                                
                                    
TestMultiNode/serial/StartAfterStop (10.67s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-265316 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-265316 node start m03 --alsologtostderr: (10.007453033s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-265316 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.67s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (111.96s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-265316
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-265316
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-265316: (24.808888073s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-265316 --wait=true -v=8 --alsologtostderr
E0717 22:19:21.623357  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/client.crt: no such file or directory
E0717 22:19:49.309155  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-265316 --wait=true -v=8 --alsologtostderr: (1m27.071174573s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-265316
--- PASS: TestMultiNode/serial/RestartKeepsNodes (111.96s)

                                                
                                    
TestMultiNode/serial/DeleteNode (4.66s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-265316 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-265316 node delete m03: (4.059417677s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-265316 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.66s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.84s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-265316 stop
E0717 22:20:36.956499  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/client.crt: no such file or directory
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-265316 stop: (23.670485762s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-265316 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-265316 status: exit status 7 (82.953855ms)

                                                
                                                
-- stdout --
	multinode-265316
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-265316-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-265316 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-265316 status --alsologtostderr: exit status 7 (85.098012ms)

                                                
                                                
-- stdout --
	multinode-265316
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-265316-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 22:20:58.431443  333165 out.go:296] Setting OutFile to fd 1 ...
	I0717 22:20:58.431658  333165 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:20:58.431667  333165 out.go:309] Setting ErrFile to fd 2...
	I0717 22:20:58.431671  333165 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:20:58.431864  333165 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-218877/.minikube/bin
	I0717 22:20:58.432037  333165 out.go:303] Setting JSON to false
	I0717 22:20:58.432070  333165 mustload.go:65] Loading cluster: multinode-265316
	I0717 22:20:58.432184  333165 notify.go:220] Checking for updates...
	I0717 22:20:58.432409  333165 config.go:182] Loaded profile config "multinode-265316": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:20:58.432422  333165 status.go:255] checking status of multinode-265316 ...
	I0717 22:20:58.432771  333165 cli_runner.go:164] Run: docker container inspect multinode-265316 --format={{.State.Status}}
	I0717 22:20:58.452143  333165 status.go:330] multinode-265316 host status = "Stopped" (err=<nil>)
	I0717 22:20:58.452191  333165 status.go:343] host is not running, skipping remaining checks
	I0717 22:20:58.452203  333165 status.go:257] multinode-265316 status: &{Name:multinode-265316 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 22:20:58.452270  333165 status.go:255] checking status of multinode-265316-m02 ...
	I0717 22:20:58.452681  333165 cli_runner.go:164] Run: docker container inspect multinode-265316-m02 --format={{.State.Status}}
	I0717 22:20:58.471085  333165 status.go:330] multinode-265316-m02 host status = "Stopped" (err=<nil>)
	I0717 22:20:58.471128  333165 status.go:343] host is not running, skipping remaining checks
	I0717 22:20:58.471137  333165 status.go:257] multinode-265316-m02 status: &{Name:multinode-265316-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.84s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (77.71s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-265316 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0717 22:21:42.740911  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/functional-994983/client.crt: no such file or directory
E0717 22:22:00.001542  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-265316 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m17.075161857s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-265316 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (77.71s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (24.34s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-265316
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-265316-m02 --driver=docker  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-265316-m02 --driver=docker  --container-runtime=crio: exit status 14 (66.763011ms)

                                                
                                                
-- stdout --
	* [multinode-265316-m02] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16899
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16899-218877/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-218877/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-265316-m02' is duplicated with machine name 'multinode-265316-m02' in profile 'multinode-265316'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-265316-m03 --driver=docker  --container-runtime=crio
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-265316-m03 --driver=docker  --container-runtime=crio: (22.113788352s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-265316
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-265316: exit status 80 (260.315634ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-265316
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-265316-m03 already exists in multinode-265316-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-265316-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-265316-m03: (1.851291005s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (24.34s)

                                                
                                    
TestPreload (141.09s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-542855 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-542855 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m11.848570921s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-542855 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-542855 image pull gcr.io/k8s-minikube/busybox: (2.552190982s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-542855
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-542855: (5.727447651s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-542855 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0717 22:24:21.623653  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-542855 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (58.484084129s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-542855 image list
helpers_test.go:175: Cleaning up "test-preload-542855" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-542855
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-542855: (2.269764258s)
--- PASS: TestPreload (141.09s)
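
Note: the sequence above (pull, stop, restart, list) demonstrates that a pulled image survives a restart even when the cluster was created with --preload=false. A minimal sketch (profile name illustrative):

	out/minikube-linux-amd64 -p preload-demo image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-amd64 stop -p preload-demo
	out/minikube-linux-amd64 start -p preload-demo
	out/minikube-linux-amd64 -p preload-demo image list | grep busybox   # still listed after the restart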

                                                
                                    
TestScheduledStopUnix (101.39s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-399782 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-399782 --memory=2048 --driver=docker  --container-runtime=crio: (25.513734974s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-399782 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-399782 -n scheduled-stop-399782
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-399782 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-399782 --cancel-scheduled
E0717 22:25:36.956113  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-399782 -n scheduled-stop-399782
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-399782
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-399782 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0717 22:26:42.740739  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/functional-994983/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-399782
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-399782: exit status 7 (60.90866ms)

                                                
                                                
-- stdout --
	scheduled-stop-399782
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-399782 -n scheduled-stop-399782
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-399782 -n scheduled-stop-399782: exit status 7 (61.502275ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-399782" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-399782
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-399782: (4.586678499s)
--- PASS: TestScheduledStopUnix (101.39s)
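
Note: scheduled stops are armed, inspected, and disarmed with the flags exercised above. A minimal sketch (profile name illustrative):

	out/minikube-linux-amd64 stop -p sched-demo --schedule 5m                 # arm a stop five minutes out
	out/minikube-linux-amd64 status --format={{.TimeToStop}} -p sched-demo    # non-empty while a stop is armed
	out/minikube-linux-amd64 stop -p sched-demo --cancel-scheduled            # disarm it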

                                                
                                    
TestInsufficientStorage (12.72s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-582591 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-582591 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.35334281s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"64804b90-0cb1-4033-95c8-4d41f2c8bef2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-582591] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a6adcd0e-edc8-4821-9227-03dbb34960c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16899"}}
	{"specversion":"1.0","id":"661a063f-76c9-4d28-aa9e-1d3179a2cbb2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7fbee41b-a1d5-46a6-8981-b4937bede06b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/16899-218877/kubeconfig"}}
	{"specversion":"1.0","id":"69830984-c759-45a2-aef2-1c5f63295622","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-218877/.minikube"}}
	{"specversion":"1.0","id":"2739e107-346a-47cf-8678-91be69329acd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"95a1e654-60b1-4669-9ff1-877d9688873f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0fe84acb-4410-4d59-8651-84c88cfdd8d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"41cf65e3-cce0-410b-9e2a-9ad9da07a4a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"8f914826-2992-45f1-9bef-900800838a99","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"867e82a0-7141-4284-a466-d8a5e319bb8b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"111aaae0-ba0f-4ebb-bd66-1903e1687232","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-582591 in cluster insufficient-storage-582591","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"38fdda7b-ba81-48cb-acbd-7daf08286a3a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"d9048815-e66f-4359-bd89-1cce9585dead","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"3c7d4d52-4184-4710-b3c5-1a269cf11fae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
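The RSRC_DOCKER_STORAGE payload above carries its own remediation advice. A sketch of those steps as shell commands (profile name taken from this run; the ssh variant only applies when the node uses the Docker runtime, not the crio runtime under test here):

	docker system prune                                                      # remove unused Docker data on the host; add -a to also drop unused images
	minikube ssh -- docker system prune                                      # prune inside the node, per the advice, for the Docker runtime
	out/minikube-linux-amd64 start -p insufficient-storage-582591 --force   # or skip the storage check entirely, as the error message notes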
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-582591 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-582591 --output=json --layout=cluster: exit status 7 (276.335041ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-582591","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.31.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-582591","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 22:26:59.232677  354874 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-582591" does not appear in /home/jenkins/minikube-integration/16899-218877/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-582591 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-582591 --output=json --layout=cluster: exit status 7 (262.406209ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-582591","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.31.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-582591","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 22:26:59.494973  354959 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-582591" does not appear in /home/jenkins/minikube-integration/16899-218877/kubeconfig
	E0717 22:26:59.505033  354959 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/insufficient-storage-582591/events.json: no such file or directory

                                                
                                                
** /stderr **
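Both status calls return the same 507/InsufficientStorage layout; the second run has simply lost the Step/StepDetail fields. One way to pull the interesting fields out of the cluster layout, assuming jq is available on the host:

	out/minikube-linux-amd64 status -p insufficient-storage-582591 --output=json --layout=cluster \
	  | jq '{status: .StatusName, detail: .StatusDetail, nodes: [.Nodes[].StatusName]}'
	# here: "InsufficientStorage" at the top level, with apiserver and kubelet both "Stopped" (405)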
helpers_test.go:175: Cleaning up "insufficient-storage-582591" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-582591
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-582591: (1.823205907s)
--- PASS: TestInsufficientStorage (12.72s)

                                                
                                    
x
+
TestKubernetesUpgrade (366.33s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-157727 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0717 22:28:05.783825  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/functional-994983/client.crt: no such file or directory
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-157727 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m4.716035503s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-157727
version_upgrade_test.go:239: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-157727: (1.251891644s)
version_upgrade_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-157727 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-157727 status --format={{.Host}}: exit status 7 (66.356503ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-157727 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:255: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-157727 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m34.571333975s)
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-157727 version --output=json
version_upgrade_test.go:279: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:281: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-157727 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:281: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-157727 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (74.067053ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-157727] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16899
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16899-218877/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-218877/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.27.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-157727
	    minikube start -p kubernetes-upgrade-157727 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1577272 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.27.3, by running:
	    
	    minikube start -p kubernetes-upgrade-157727 --kubernetes-version=v1.27.3
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:285: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:287: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-157727 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:287: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-157727 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (23.225093626s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-157727" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-157727
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-157727: (2.353256022s)
--- PASS: TestKubernetesUpgrade (366.33s)

                                                
                                    
x
+
TestMissingContainerUpgrade (142.34s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:321: (dbg) Run:  /tmp/minikube-v1.9.0.3565049660.exe start -p missing-upgrade-059561 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:321: (dbg) Done: /tmp/minikube-v1.9.0.3565049660.exe start -p missing-upgrade-059561 --memory=2200 --driver=docker  --container-runtime=crio: (1m10.639755756s)
version_upgrade_test.go:330: (dbg) Run:  docker stop missing-upgrade-059561
version_upgrade_test.go:335: (dbg) Run:  docker rm missing-upgrade-059561
version_upgrade_test.go:341: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-059561 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:341: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-059561 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m7.395284913s)
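The sequence above is the scenario under test: the node container created by the legacy v1.9.0 binary is stopped and removed behind minikube's back, and a plain start with the current binary must notice the missing container and recreate it from the surviving profile. Condensed, the steps are:

	/tmp/minikube-v1.9.0.3565049660.exe start -p missing-upgrade-059561 --memory=2200 --driver=docker --container-runtime=crio
	docker stop missing-upgrade-059561 && docker rm missing-upgrade-059561
	out/minikube-linux-amd64 start -p missing-upgrade-059561 --memory=2200 --alsologtostderr -v=1 --driver=docker --container-runtime=crio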
helpers_test.go:175: Cleaning up "missing-upgrade-059561" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-059561
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-059561: (2.00649441s)
--- PASS: TestMissingContainerUpgrade (142.34s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (1.93s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.93s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-152685 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-152685 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (75.255532ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-152685] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16899
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16899-218877/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-218877/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
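The usage error is the expected result: --kubernetes-version and --no-kubernetes are mutually exclusive. A sketch of the two accepted forms (both are exactly what later subtests run):

	out/minikube-linux-amd64 start -p NoKubernetes-152685 --driver=docker --container-runtime=crio                    # with Kubernetes
	out/minikube-linux-amd64 start -p NoKubernetes-152685 --no-kubernetes --driver=docker --container-runtime=crio   # without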
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (32.13s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-152685 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-152685 --driver=docker  --container-runtime=crio: (31.759689042s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-152685 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (32.13s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (5.66s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-152685 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-152685 --no-kubernetes --driver=docker  --container-runtime=crio: (3.385087636s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-152685 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-152685 status -o json: exit status 2 (286.101233ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-152685","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-152685
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-152685: (1.990016837s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (5.66s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (10.95s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-152685 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-152685 --no-kubernetes --driver=docker  --container-runtime=crio: (10.946939474s)
--- PASS: TestNoKubernetes/serial/Start (10.95s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-152685 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-152685 "sudo systemctl is-active --quiet service kubelet": exit status 1 (298.882566ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
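The ssh exit status 3 is the systemctl result passed through: is-active exits 0 only for an active unit, and the non-zero status (3, conventionally "inactive") is precisely what the test wants from a cluster started without Kubernetes. Checked by hand (single quotes keep $? from being expanded by the local shell):

	out/minikube-linux-amd64 ssh -p NoKubernetes-152685 'sudo systemctl is-active kubelet; echo code=$?'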
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.30s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.59s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.59s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-152685
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-152685: (1.366516699s)
--- PASS: TestNoKubernetes/serial/Stop (1.37s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (8s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-152685 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-152685 --driver=docker  --container-runtime=crio: (8.0035035s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-152685 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-152685 "sudo systemctl is-active --quiet service kubelet": exit status 1 (337.693641ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.64s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-173210
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.64s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-365981 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-365981 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (160.245856ms)

                                                
                                                
-- stdout --
	* [false-365981] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16899
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16899-218877/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-218877/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 22:29:11.579765  392892 out.go:296] Setting OutFile to fd 1 ...
	I0717 22:29:11.579894  392892 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:29:11.579905  392892 out.go:309] Setting ErrFile to fd 2...
	I0717 22:29:11.579912  392892 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 22:29:11.580125  392892 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16899-218877/.minikube/bin
	I0717 22:29:11.580790  392892 out.go:303] Setting JSON to false
	I0717 22:29:11.582406  392892 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":7896,"bootTime":1689625056,"procs":866,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 22:29:11.582472  392892 start.go:138] virtualization: kvm guest
	I0717 22:29:11.584972  392892 out.go:177] * [false-365981] minikube v1.31.0 on Ubuntu 20.04 (kvm/amd64)
	I0717 22:29:11.586551  392892 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 22:29:11.587975  392892 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 22:29:11.586580  392892 notify.go:220] Checking for updates...
	I0717 22:29:11.591812  392892 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16899-218877/kubeconfig
	I0717 22:29:11.593133  392892 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16899-218877/.minikube
	I0717 22:29:11.594260  392892 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 22:29:11.595568  392892 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 22:29:11.597619  392892 config.go:182] Loaded profile config "force-systemd-env-452980": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:29:11.597768  392892 config.go:182] Loaded profile config "kubernetes-upgrade-157727": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 22:29:11.597896  392892 config.go:182] Loaded profile config "missing-upgrade-059561": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0717 22:29:11.598042  392892 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 22:29:11.620681  392892 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 22:29:11.620782  392892 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 22:29:11.684222  392892 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:57 SystemTime:2023-07-17 22:29:11.674463018 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 22:29:11.684385  392892 docker.go:294] overlay module found
	I0717 22:29:11.686739  392892 out.go:177] * Using the docker driver based on user configuration
	I0717 22:29:11.688266  392892 start.go:298] selected driver: docker
	I0717 22:29:11.688285  392892 start.go:880] validating driver "docker" against <nil>
	I0717 22:29:11.688305  392892 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 22:29:11.690894  392892 out.go:177] 
	W0717 22:29:11.693112  392892 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0717 22:29:11.694612  392892 out.go:177] 

                                                
                                                
** /stderr **
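The MK_USAGE rejection is deliberate: crio has no built-in networking, so minikube refuses --cni=false for it. Any concrete CNI choice would pass validation; for example (a hypothetical fix for illustration only, since this test asserts nothing beyond the rejection):

	out/minikube-linux-amd64 start -p false-365981 --memory=2048 --cni=bridge --driver=docker --container-runtime=crio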
net_test.go:88: 
----------------------- debugLogs start: false-365981 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-365981

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-365981

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-365981

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-365981

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-365981

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-365981

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-365981

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-365981

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-365981

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-365981

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-365981"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-365981"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-365981"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-365981

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-365981"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-365981"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-365981" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-365981" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-365981" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-365981" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-365981" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-365981" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-365981" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-365981" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-365981"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-365981"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-365981"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-365981"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-365981"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-365981" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-365981" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-365981" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-365981"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-365981"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-365981"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-365981"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-365981"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/16899-218877/.minikube/ca.crt
    server: https://127.0.0.1:32939
  name: missing-upgrade-059561
contexts:
- context:
    cluster: missing-upgrade-059561
    user: missing-upgrade-059561
  name: missing-upgrade-059561
current-context: ""
kind: Config
preferences: {}
users:
- name: missing-upgrade-059561
  user:
    client-certificate: /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/missing-upgrade-059561/client.crt
    client-key: /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/missing-upgrade-059561/client.key
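The dump above holds only leftover missing-upgrade-059561 entries, and current-context is the empty string, which is why every kubectl call against the false-365981 context fails as shown. A quick check for this state:

	kubectl config current-context   # errors out ("current-context is not set") when the field is empty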

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-365981

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-365981"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-365981"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-365981"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-365981"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-365981"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-365981"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-365981"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-365981"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-365981"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-365981"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-365981"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-365981"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-365981"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-365981"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-365981"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-365981"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-365981"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-365981"

                                                
                                                
----------------------- debugLogs end: false-365981 [took: 2.740523477s] --------------------------------
helpers_test.go:175: Cleaning up "false-365981" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-365981
--- PASS: TestNetworkPlugins/group/false (3.05s)

                                                
                                    
x
+
TestPause/serial/Start (42.35s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-741549 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-741549 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (42.354404489s)
--- PASS: TestPause/serial/Start (42.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (41.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-365981 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0717 22:30:36.955422  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/client.crt: no such file or directory
E0717 22:30:44.670260  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-365981 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (41.194915965s)
--- PASS: TestNetworkPlugins/group/auto/Start (41.20s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (39.99s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-741549 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-741549 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (39.967808589s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (39.99s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-365981 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (10.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-365981 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-pjfx7" [6186a3a8-0466-4871-b4bc-787e13e6ade4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-pjfx7" [6186a3a8-0466-4871-b4bc-787e13e6ade4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.006043274s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-365981 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-365981 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-365981 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                    
x
+
TestPause/serial/Pause (0.85s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-741549 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.85s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (41.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-365981 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-365981 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (41.815866018s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (41.82s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.41s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-741549 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-741549 --output=json --layout=cluster: exit status 2 (411.886054ms)

                                                
                                                
-- stdout --
	{"Name":"pause-741549","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.31.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-741549","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.41s)
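
Note: with --output=json --layout=cluster, a paused profile is reported with StatusCode 418 ("Paused") and the command exits 2, so callers should parse the JSON rather than trust the exit code alone. A sketch of extracting the per-component states with jq (field names taken from the stdout above; status.json is a placeholder file):

    minikube status -p pause-741549 --output=json --layout=cluster > status.json || true
    jq -r '.StatusName' status.json     # e.g. "Paused"
    jq -r '.Nodes[].Components | to_entries[] | "\(.key): \(.value.StatusName)"' status.json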

TestPause/serial/Unpause (0.79s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-741549 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.79s)

TestPause/serial/PauseAgain (0.8s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-741549 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.80s)

TestPause/serial/DeletePaused (4.1s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-741549 --alsologtostderr -v=5
E0717 22:31:42.740694  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/functional-994983/client.crt: no such file or directory
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-741549 --alsologtostderr -v=5: (4.09652817s)
--- PASS: TestPause/serial/DeletePaused (4.10s)

TestPause/serial/VerifyDeletedResources (0.54s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-741549
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-741549: exit status 1 (18.168092ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-741549: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.54s)
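
Note: VerifyDeletedResources treats the "no such volume" error above as success; it proves `minikube delete` removed the profile's Docker artifacts. The same cleanup check, sketched for an arbitrary profile name:

    PROFILE=pause-741549    # substitute the profile under test
    docker ps -a --filter "name=${PROFILE}" --format '{{.Names}}'      # expect empty
    docker volume inspect "${PROFILE}" >/dev/null 2>&1 \
      && echo "volume still present" || echo "volume gone (expected)"
    docker network ls --filter "name=${PROFILE}" --format '{{.Name}}'  # expect empty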

TestNetworkPlugins/group/calico/Start (62.44s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-365981 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-365981 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m2.436712821s)
--- PASS: TestNetworkPlugins/group/calico/Start (62.44s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-xm22n" [896bb648-db6d-474d-bac3-476b91845b36] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.02073461s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)
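
Note: the ControllerPod steps poll until a Running pod matches a label selector. Outside the harness, `kubectl wait` expresses the same condition directly (selector and context taken from the log):

    kubectl --context kindnet-365981 -n kube-system wait pod \
      --selector app=kindnet --for=condition=Ready --timeout=600s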

TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-365981 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.43s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-365981 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-hl7t4" [3ae9a967-ef72-4541-88e0-eaeab432732b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-hl7t4" [3ae9a967-ef72-4541-88e0-eaeab432732b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.007019331s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.43s)

TestNetworkPlugins/group/kindnet/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-365981 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

TestNetworkPlugins/group/kindnet/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-365981 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

TestNetworkPlugins/group/kindnet/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-365981 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

TestNetworkPlugins/group/calico/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-v7hll" [fb495c83-1d60-43e2-ba99-daec53756730] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.018229446s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestNetworkPlugins/group/calico/KubeletFlags (0.28s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-365981 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

TestNetworkPlugins/group/calico/NetCatPod (12.25s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-365981 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-swdcc" [70d85d3d-36f2-4bb9-b117-96a3a9e8fa17] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-swdcc" [70d85d3d-36f2-4bb9-b117-96a3a9e8fa17] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.006775284s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.25s)

TestNetworkPlugins/group/custom-flannel/Start (65.17s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-365981 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-365981 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m5.169790297s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (65.17s)
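
Note: unlike the named plugins (--cni=kindnet, --cni=calico, --cni=flannel), this run passes a manifest path, --cni=testdata/kube-flannel.yaml, so minikube applies a user-supplied CNI manifest instead of a bundled one. The same pattern with a placeholder manifest path:

    # ./my-cni.yaml is a stand-in for any CNI DaemonSet manifest
    minikube start -p custom-cni-demo --cni=./my-cni.yaml \
      --driver=docker --container-runtime=crio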

TestNetworkPlugins/group/calico/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-365981 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

TestNetworkPlugins/group/calico/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-365981 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

TestNetworkPlugins/group/calico/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-365981 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

TestNetworkPlugins/group/enable-default-cni/Start (42.23s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-365981 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-365981 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (42.228220482s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (42.23s)

TestNetworkPlugins/group/flannel/Start (61.59s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-365981 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-365981 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m1.591076613s)
--- PASS: TestNetworkPlugins/group/flannel/Start (61.59s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-365981 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.32s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-365981 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-2jsrt" [fd662786-d696-401c-9c08-17445433b120] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-2jsrt" [fd662786-d696-401c-9c08-17445433b120] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.006920699s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.32s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-365981 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.41s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-365981 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-b69sv" [4e97e550-7af8-44e8-9f0e-f108ecb57ea1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-b69sv" [4e97e550-7af8-44e8-9f0e-f108ecb57ea1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.007954873s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.41s)

TestNetworkPlugins/group/bridge/Start (42.74s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-365981 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-365981 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (42.737958919s)
--- PASS: TestNetworkPlugins/group/bridge/Start (42.74s)

TestNetworkPlugins/group/custom-flannel/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-365981 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-365981 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-365981 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-365981 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-365981 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-365981 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

TestStartStop/group/old-k8s-version/serial/FirstStart (137.9s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-352922 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-352922 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m17.896102753s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (137.90s)

TestNetworkPlugins/group/flannel/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-98ft6" [e6d7a41d-4433-4f20-9f3f-ac31f7fcd7f5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.01596671s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-365981 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/flannel/NetCatPod (10.39s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-365981 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-pcmwx" [cefd09c7-1d49-4ed8-b76a-0336195c8da5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-pcmwx" [cefd09c7-1d49-4ed8-b76a-0336195c8da5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.007534182s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.39s)

TestStartStop/group/no-preload/serial/FirstStart (65.72s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-959453 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-959453 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (1m5.719616501s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (65.72s)

TestNetworkPlugins/group/flannel/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-365981 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

TestNetworkPlugins/group/flannel/Localhost (0.21s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-365981 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.21s)

TestNetworkPlugins/group/flannel/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-365981 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-365981 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

TestNetworkPlugins/group/bridge/NetCatPod (9.37s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-365981 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-v9lwr" [fcb90a18-64d9-4bae-91f7-65f7b68ddec5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-v9lwr" [fcb90a18-64d9-4bae-91f7-65f7b68ddec5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.008425139s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.37s)

TestNetworkPlugins/group/bridge/DNS (0.22s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-365981 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.22s)

TestNetworkPlugins/group/bridge/Localhost (0.19s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-365981 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

TestNetworkPlugins/group/bridge/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-365981 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.18s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (44.99s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-430751 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-430751 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (44.987963403s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (44.99s)

TestStartStop/group/newest-cni/serial/FirstStart (37.91s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-612535 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
E0717 22:35:36.955593  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-612535 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (37.906621025s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (37.91s)

TestStartStop/group/no-preload/serial/DeployApp (10.42s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-959453 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1394bef1-ba84-48c7-8f43-0b681cff8dfe] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1394bef1-ba84-48c7-8f43-0b681cff8dfe] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.015766126s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-959453 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.42s)
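
Note: DeployApp creates the pod from testdata/busybox.yaml (contents not shown in this report) and then reads the container's open-file limit. A rough stand-in via kubectl run; the image tag and sleep command are assumptions, while the integration-test=busybox label matches the selector above:

    kubectl --context no-preload-959453 run busybox --image=busybox:stable \
      --labels=integration-test=busybox --restart=Never -- sleep 3600
    kubectl --context no-preload-959453 wait pod busybox --for=condition=Ready --timeout=8m
    kubectl --context no-preload-959453 exec busybox -- /bin/sh -c "ulimit -n"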

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.97s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-959453 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-959453 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.97s)

TestStartStop/group/no-preload/serial/Stop (11.95s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-959453 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-959453 --alsologtostderr -v=3: (11.950366093s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.95s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.46s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-430751 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [328eff44-04d6-471e-a045-5e0dd72f24fb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [328eff44-04d6-471e-a045-5e0dd72f24fb] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.012639856s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-430751 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.46s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.9s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-612535 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.90s)

TestStartStop/group/newest-cni/serial/Stop (1.19s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-612535 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-612535 --alsologtostderr -v=3: (1.187711315s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.19s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-612535 -n newest-cni-612535
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-612535 -n newest-cni-612535: exit status 7 (62.245852ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-612535 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)
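
Note: per minikube's status help, the exit code is a bitmask (1 = host down, 2 = cluster down, 4 = kubernetes down), so exit status 7 here simply means everything is stopped, which is what the test expects after a stop. A script calling status on a possibly-stopped profile should branch on the code rather than abort:

    if host=$(minikube status --format='{{.Host}}' -p newest-cni-612535); then
      echo "host: ${host}"
    else
      rc=$?
      [ "$rc" -eq 7 ] && echo "stopped (rc=7), fine" || exit "$rc"
    fi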

TestStartStop/group/newest-cni/serial/SecondStart (27.03s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-612535 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-612535 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (26.700837092s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-612535 -n newest-cni-612535
E0717 22:36:31.264378  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/auto-365981/client.crt: no such file or directory
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (27.03s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.08s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-430751 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-430751 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.08s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.94s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-430751 --alsologtostderr -v=3
E0717 22:36:10.781742  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/auto-365981/client.crt: no such file or directory
E0717 22:36:10.787043  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/auto-365981/client.crt: no such file or directory
E0717 22:36:10.797370  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/auto-365981/client.crt: no such file or directory
E0717 22:36:10.818426  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/auto-365981/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-430751 --alsologtostderr -v=3: (11.940007024s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.94s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-959453 -n no-preload-959453
E0717 22:36:10.859455  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/auto-365981/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-959453 -n no-preload-959453: exit status 7 (68.48745ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-959453 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0717 22:36:10.940344  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/auto-365981/client.crt: no such file or directory
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/no-preload/serial/SecondStart (334.84s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-959453 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
E0717 22:36:11.101245  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/auto-365981/client.crt: no such file or directory
E0717 22:36:11.421376  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/auto-365981/client.crt: no such file or directory
E0717 22:36:12.062050  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/auto-365981/client.crt: no such file or directory
E0717 22:36:13.342916  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/auto-365981/client.crt: no such file or directory
E0717 22:36:15.903095  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/auto-365981/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-959453 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (5m34.523663676s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-959453 -n no-preload-959453
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (334.84s)
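
Note: the recurring "cert_rotation.go:168] key failed" lines appear to come from the test process's client-go certificate watcher, which still references client certs of profiles deleted earlier in the run (auto-365981, functional-994983, addons-759450, kindnet-365981); they are log noise from the shared process, not failures of the test in progress. When scanning a captured run they can be filtered out first (run.log is a placeholder name):

    grep -v 'cert_rotation.go:168] key failed' run.log > run.filtered.log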

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-430751 -n default-k8s-diff-port-430751
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-430751 -n default-k8s-diff-port-430751: exit status 7 (63.902566ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-430751 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (335.03s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-430751 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
E0717 22:36:21.024077  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/auto-365981/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-430751 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (5m34.710595377s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-430751 -n default-k8s-diff-port-430751
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (335.03s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.33s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-612535 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.33s)

TestStartStop/group/newest-cni/serial/Pause (2.81s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-612535 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-612535 -n newest-cni-612535
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-612535 -n newest-cni-612535: exit status 2 (306.995957ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-612535 -n newest-cni-612535
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-612535 -n newest-cni-612535: exit status 2 (324.560502ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-612535 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-612535 -n newest-cni-612535
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-612535 -n newest-cni-612535
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.81s)
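
Note: the Pause flow asserts a specific split after pausing: the apiserver reports "Paused" and the kubelet "Stopped", each via exit status 2, and unpause must bring both back. Condensed to a shell round-trip (profile name from the log):

    p=newest-cni-612535
    minikube pause -p "$p"
    minikube status -p "$p" --format='{{.APIServer}}'   # "Paused", exit 2
    minikube status -p "$p" --format='{{.Kubelet}}'     # "Stopped", exit 2
    minikube unpause -p "$p"
    minikube status -p "$p" --format='{{.APIServer}}'   # expected back to Running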

TestStartStop/group/embed-certs/serial/FirstStart (41.1s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-961084 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
E0717 22:36:42.740099  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/functional-994983/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-961084 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (41.096294156s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (41.10s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.43s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-352922 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2fa63969-3165-4276-bf6e-6cc48b966246] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0717 22:36:51.744693  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/auto-365981/client.crt: no such file or directory
helpers_test.go:344: "busybox" [2fa63969-3165-4276-bf6e-6cc48b966246] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.014903626s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-352922 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.43s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.8s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-352922 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-352922 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.80s)

TestStartStop/group/old-k8s-version/serial/Stop (12.03s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-352922 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-352922 --alsologtostderr -v=3: (12.027824575s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.03s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-352922 -n old-k8s-version-352922
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-352922 -n old-k8s-version-352922: exit status 7 (75.116735ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-352922 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/old-k8s-version/serial/SecondStart (432.33s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-352922 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-352922 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m12.018463522s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-352922 -n old-k8s-version-352922
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (432.33s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.39s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-961084 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [da5fa102-89ca-4577-a034-93576489ef29] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0717 22:37:21.089099  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/kindnet-365981/client.crt: no such file or directory
E0717 22:37:21.094377  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/kindnet-365981/client.crt: no such file or directory
E0717 22:37:21.104718  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/kindnet-365981/client.crt: no such file or directory
E0717 22:37:21.125044  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/kindnet-365981/client.crt: no such file or directory
E0717 22:37:21.165387  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/kindnet-365981/client.crt: no such file or directory
E0717 22:37:21.245754  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/kindnet-365981/client.crt: no such file or directory
E0717 22:37:21.406470  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/kindnet-365981/client.crt: no such file or directory
E0717 22:37:21.727125  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/kindnet-365981/client.crt: no such file or directory
E0717 22:37:22.367659  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/kindnet-365981/client.crt: no such file or directory
helpers_test.go:344: "busybox" [da5fa102-89ca-4577-a034-93576489ef29] Running
E0717 22:37:23.647882  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/kindnet-365981/client.crt: no such file or directory
E0717 22:37:26.208228  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/kindnet-365981/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.014708397s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-961084 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.39s)
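Editor's note: the busybox readiness poll above comes from a test helper; an equivalent hand-run check is sketched below (context and label are taken from the log, the timeout is arbitrary):

    kubectl --context embed-certs-961084 wait pod -l integration-test=busybox \
      --for=condition=Ready --timeout=480s
    kubectl --context embed-certs-961084 exec busybox -- /bin/sh -c 'ulimit -n'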

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.95s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-961084 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-961084 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.95s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-961084 --alsologtostderr -v=3
E0717 22:37:31.328975  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/kindnet-365981/client.crt: no such file or directory
E0717 22:37:32.705547  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/auto-365981/client.crt: no such file or directory
E0717 22:37:41.569317  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/kindnet-365981/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-961084 --alsologtostderr -v=3: (12.102964222s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-961084 -n embed-certs-961084
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-961084 -n embed-certs-961084: exit status 7 (78.599802ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-961084 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (586.59s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-961084 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
E0717 22:37:48.510427  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/calico-365981/client.crt: no such file or directory
E0717 22:37:48.516401  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/calico-365981/client.crt: no such file or directory
E0717 22:37:48.527276  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/calico-365981/client.crt: no such file or directory
E0717 22:37:48.547659  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/calico-365981/client.crt: no such file or directory
E0717 22:37:48.588707  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/calico-365981/client.crt: no such file or directory
E0717 22:37:48.669322  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/calico-365981/client.crt: no such file or directory
E0717 22:37:48.829770  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/calico-365981/client.crt: no such file or directory
E0717 22:37:49.150258  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/calico-365981/client.crt: no such file or directory
E0717 22:37:49.790862  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/calico-365981/client.crt: no such file or directory
E0717 22:37:51.072060  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/calico-365981/client.crt: no such file or directory
E0717 22:37:53.632622  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/calico-365981/client.crt: no such file or directory
E0717 22:37:58.753440  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/calico-365981/client.crt: no such file or directory
E0717 22:38:02.050368  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/kindnet-365981/client.crt: no such file or directory
E0717 22:38:08.994460  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/calico-365981/client.crt: no such file or directory
E0717 22:38:29.474625  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/calico-365981/client.crt: no such file or directory
E0717 22:38:40.002008  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/client.crt: no such file or directory
E0717 22:38:43.010691  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/kindnet-365981/client.crt: no such file or directory
E0717 22:38:54.626389  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/auto-365981/client.crt: no such file or directory
E0717 22:39:02.597214  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/custom-flannel-365981/client.crt: no such file or directory
E0717 22:39:02.602477  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/custom-flannel-365981/client.crt: no such file or directory
E0717 22:39:02.612763  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/custom-flannel-365981/client.crt: no such file or directory
E0717 22:39:02.633064  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/custom-flannel-365981/client.crt: no such file or directory
E0717 22:39:02.673407  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/custom-flannel-365981/client.crt: no such file or directory
E0717 22:39:02.753819  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/custom-flannel-365981/client.crt: no such file or directory
E0717 22:39:02.914233  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/custom-flannel-365981/client.crt: no such file or directory
E0717 22:39:03.234816  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/custom-flannel-365981/client.crt: no such file or directory
E0717 22:39:03.875785  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/custom-flannel-365981/client.crt: no such file or directory
E0717 22:39:05.156063  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/custom-flannel-365981/client.crt: no such file or directory
E0717 22:39:07.716450  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/custom-flannel-365981/client.crt: no such file or directory
E0717 22:39:09.608274  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/enable-default-cni-365981/client.crt: no such file or directory
E0717 22:39:09.613582  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/enable-default-cni-365981/client.crt: no such file or directory
E0717 22:39:09.623850  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/enable-default-cni-365981/client.crt: no such file or directory
E0717 22:39:09.644168  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/enable-default-cni-365981/client.crt: no such file or directory
E0717 22:39:09.684501  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/enable-default-cni-365981/client.crt: no such file or directory
E0717 22:39:09.764859  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/enable-default-cni-365981/client.crt: no such file or directory
E0717 22:39:09.925226  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/enable-default-cni-365981/client.crt: no such file or directory
E0717 22:39:10.245619  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/enable-default-cni-365981/client.crt: no such file or directory
E0717 22:39:10.435278  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/calico-365981/client.crt: no such file or directory
E0717 22:39:10.886065  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/enable-default-cni-365981/client.crt: no such file or directory
E0717 22:39:12.166995  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/enable-default-cni-365981/client.crt: no such file or directory
E0717 22:39:12.837551  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/custom-flannel-365981/client.crt: no such file or directory
E0717 22:39:14.727684  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/enable-default-cni-365981/client.crt: no such file or directory
E0717 22:39:19.848672  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/enable-default-cni-365981/client.crt: no such file or directory
E0717 22:39:21.622544  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/ingress-addon-legacy-988346/client.crt: no such file or directory
E0717 22:39:23.078470  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/custom-flannel-365981/client.crt: no such file or directory
E0717 22:39:30.089827  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/enable-default-cni-365981/client.crt: no such file or directory
E0717 22:39:34.413827  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/flannel-365981/client.crt: no such file or directory
E0717 22:39:34.419110  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/flannel-365981/client.crt: no such file or directory
E0717 22:39:34.429421  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/flannel-365981/client.crt: no such file or directory
E0717 22:39:34.449790  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/flannel-365981/client.crt: no such file or directory
E0717 22:39:34.490164  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/flannel-365981/client.crt: no such file or directory
E0717 22:39:34.570568  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/flannel-365981/client.crt: no such file or directory
E0717 22:39:34.731058  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/flannel-365981/client.crt: no such file or directory
E0717 22:39:35.051986  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/flannel-365981/client.crt: no such file or directory
E0717 22:39:35.692606  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/flannel-365981/client.crt: no such file or directory
E0717 22:39:36.973543  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/flannel-365981/client.crt: no such file or directory
E0717 22:39:39.534086  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/flannel-365981/client.crt: no such file or directory
E0717 22:39:43.559072  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/custom-flannel-365981/client.crt: no such file or directory
E0717 22:39:44.655247  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/flannel-365981/client.crt: no such file or directory
E0717 22:39:50.570177  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/enable-default-cni-365981/client.crt: no such file or directory
E0717 22:39:53.217256  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/bridge-365981/client.crt: no such file or directory
E0717 22:39:53.222565  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/bridge-365981/client.crt: no such file or directory
E0717 22:39:53.232912  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/bridge-365981/client.crt: no such file or directory
E0717 22:39:53.253260  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/bridge-365981/client.crt: no such file or directory
E0717 22:39:53.293583  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/bridge-365981/client.crt: no such file or directory
E0717 22:39:53.373923  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/bridge-365981/client.crt: no such file or directory
E0717 22:39:53.534034  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/bridge-365981/client.crt: no such file or directory
E0717 22:39:53.854545  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/bridge-365981/client.crt: no such file or directory
E0717 22:39:54.495501  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/bridge-365981/client.crt: no such file or directory
E0717 22:39:54.895629  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/flannel-365981/client.crt: no such file or directory
E0717 22:39:55.776625  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/bridge-365981/client.crt: no such file or directory
E0717 22:39:58.337194  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/bridge-365981/client.crt: no such file or directory
E0717 22:40:03.457709  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/bridge-365981/client.crt: no such file or directory
E0717 22:40:04.931551  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/kindnet-365981/client.crt: no such file or directory
E0717 22:40:13.697876  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/bridge-365981/client.crt: no such file or directory
E0717 22:40:15.375946  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/flannel-365981/client.crt: no such file or directory
E0717 22:40:24.519893  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/custom-flannel-365981/client.crt: no such file or directory
E0717 22:40:31.530454  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/enable-default-cni-365981/client.crt: no such file or directory
E0717 22:40:32.356401  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/calico-365981/client.crt: no such file or directory
E0717 22:40:34.178745  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/bridge-365981/client.crt: no such file or directory
E0717 22:40:36.955838  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/addons-759450/client.crt: no such file or directory
E0717 22:40:56.336928  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/flannel-365981/client.crt: no such file or directory
E0717 22:41:10.782276  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/auto-365981/client.crt: no such file or directory
E0717 22:41:15.139005  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/bridge-365981/client.crt: no such file or directory
E0717 22:41:38.467178  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/auto-365981/client.crt: no such file or directory
E0717 22:41:42.740114  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/functional-994983/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-961084 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (9m46.273032002s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-961084 -n embed-certs-961084
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (586.59s)
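Editor's note: the repeated cert_rotation.go:168 lines interleaved above most likely come from client-certificate reload watchers still pointed at profiles (kindnet-365981, calico-365981, and so on) whose files were removed earlier in the run; they are background noise, not a failure of this test. Purely as an illustration, the surviving profile certs could be listed with:

    # path taken from the log above; illustrative only
    ls /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/*/client.crt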

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-lp75f" [0d469c5c-a04e-4fa6-8d56-8a291cbe3ad4] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0717 22:41:46.440687  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/custom-flannel-365981/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-lp75f" [0d469c5c-a04e-4fa6-8d56-8a291cbe3ad4] Running
E0717 22:41:53.451439  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/enable-default-cni-365981/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.014504719s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.02s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-gczw7" [bfc52cc2-d547-420b-b609-8208bfcc44f9] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-gczw7" [bfc52cc2-d547-420b-b609-8208bfcc44f9] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.014341234s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.02s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-lp75f" [0d469c5c-a04e-4fa6-8d56-8a291cbe3ad4] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008153433s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-959453 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-959453 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.32s)
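Editor's note: the image check above dumps the CRI image list as JSON over SSH. Assuming jq is available on the host, the same list can be flattened by hand (a sketch, not part of the test):

    out/minikube-linux-amd64 ssh -p no-preload-959453 "sudo crictl images -o json" \
      | jq -r '.images[].repoTags[]'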

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.92s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-959453 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-959453 -n no-preload-959453
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-959453 -n no-preload-959453: exit status 2 (331.545093ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-959453 -n no-preload-959453
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-959453 -n no-preload-959453: exit status 2 (341.372398ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-959453 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-959453 -n no-preload-959453
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-959453 -n no-preload-959453
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.92s)
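Editor's note: the pause cycle above leans on status exit code 2 while components are paused ("may be ok" in the helper). A condensed manual version under the same assumption:

    out/minikube-linux-amd64 pause -p no-preload-959453 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p no-preload-959453   # prints Paused, exits 2
    out/minikube-linux-amd64 unpause -p no-preload-959453 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p no-preload-959453   # exits 0 once running again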

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-gczw7" [bfc52cc2-d547-420b-b609-8208bfcc44f9] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006515732s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-430751 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-430751 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.65s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-430751 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-430751 -n default-k8s-diff-port-430751
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-430751 -n default-k8s-diff-port-430751: exit status 2 (303.24772ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-430751 -n default-k8s-diff-port-430751
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-430751 -n default-k8s-diff-port-430751: exit status 2 (293.094068ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-430751 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-430751 -n default-k8s-diff-port-430751
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-430751 -n default-k8s-diff-port-430751
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.65s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-xvvl9" [f0bf273c-8935-4953-b9ec-96da92b528b6] Running
E0717 22:44:30.281813  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/custom-flannel-365981/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013541193s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-xvvl9" [f0bf273c-8935-4953-b9ec-96da92b528b6] Running
E0717 22:44:34.413042  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/flannel-365981/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006287508s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-352922 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-352922 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.63s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-352922 --alsologtostderr -v=1
E0717 22:44:37.291795  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/enable-default-cni-365981/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-352922 -n old-k8s-version-352922
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-352922 -n old-k8s-version-352922: exit status 2 (286.907745ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-352922 -n old-k8s-version-352922
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-352922 -n old-k8s-version-352922: exit status 2 (295.480459ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-352922 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-352922 -n old-k8s-version-352922
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-352922 -n old-k8s-version-352922
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.63s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-hb5tt" [03090b08-727d-401a-a8b2-0fd8461f05bd] Running
E0717 22:47:32.065088  225642 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/old-k8s-version-352922/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.015354935s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-hb5tt" [03090b08-727d-401a-a8b2-0fd8461f05bd] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007689131s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-961084 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-961084 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.68s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-961084 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-961084 -n embed-certs-961084
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-961084 -n embed-certs-961084: exit status 2 (287.322404ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-961084 -n embed-certs-961084
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-961084 -n embed-certs-961084: exit status 2 (295.084848ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-961084 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-961084 -n embed-certs-961084
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-961084 -n embed-certs-961084
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.68s)

                                                
                                    

Test skip (24/304)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.27.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.27.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.3/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.27.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.27.3/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.3/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.27.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.27.3/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.27.3/kubectl (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.1s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:522: 
----------------------- debugLogs start: kubenet-365981 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-365981

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-365981

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-365981

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-365981

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-365981

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-365981

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-365981

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-365981

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-365981

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-365981

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-365981"

>>> host: /etc/hosts:
* Profile "kubenet-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-365981"

>>> host: /etc/resolv.conf:
* Profile "kubenet-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-365981"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: kubenet-365981

>>> host: crictl pods:
* Profile "kubenet-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-365981"

>>> host: crictl containers:
* Profile "kubenet-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-365981"

>>> k8s: describe netcat deployment:
error: context "kubenet-365981" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-365981" does not exist

>>> k8s: netcat logs:
error: context "kubenet-365981" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-365981" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-365981" does not exist

>>> k8s: coredns logs:
error: context "kubenet-365981" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-365981" does not exist

>>> k8s: api server logs:
error: context "kubenet-365981" does not exist

>>> host: /etc/cni:
* Profile "kubenet-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-365981"

>>> host: ip a s:
* Profile "kubenet-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-365981"

>>> host: ip r s:
* Profile "kubenet-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-365981"

>>> host: iptables-save:
* Profile "kubenet-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-365981"

>>> host: iptables table nat:
* Profile "kubenet-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-365981"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-365981" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-365981" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-365981" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-365981"

>>> host: kubelet daemon config:
* Profile "kubenet-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-365981"

>>> k8s: kubelet logs:
* Profile "kubenet-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-365981"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-365981"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-365981"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/16899-218877/.minikube/ca.crt
    server: https://127.0.0.1:32939
  name: missing-upgrade-059561
contexts:
- context:
    cluster: missing-upgrade-059561
    user: missing-upgrade-059561
  name: missing-upgrade-059561
current-context: ""
kind: Config
preferences: {}
users:
- name: missing-upgrade-059561
  user:
    client-certificate: /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/missing-upgrade-059561/client.crt
    client-key: /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/missing-upgrade-059561/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-365981

>>> host: docker daemon status:
* Profile "kubenet-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-365981"

>>> host: docker daemon config:
* Profile "kubenet-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-365981"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-365981"

>>> host: docker system info:
* Profile "kubenet-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-365981"

>>> host: cri-docker daemon status:
* Profile "kubenet-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-365981"

>>> host: cri-docker daemon config:
* Profile "kubenet-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-365981"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-365981"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-365981"

>>> host: cri-dockerd version:
* Profile "kubenet-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-365981"

>>> host: containerd daemon status:
* Profile "kubenet-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-365981"

>>> host: containerd daemon config:
* Profile "kubenet-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-365981"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-365981"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-365981"

>>> host: containerd config dump:
* Profile "kubenet-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-365981"

>>> host: crio daemon status:
* Profile "kubenet-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-365981"

>>> host: crio daemon config:
* Profile "kubenet-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-365981"

>>> host: /etc/crio:
* Profile "kubenet-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-365981"

>>> host: crio config:
* Profile "kubenet-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-365981"

----------------------- debugLogs end: kubenet-365981 [took: 2.965121329s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-365981" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-365981
--- SKIP: TestNetworkPlugins/group/kubenet (3.10s)
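
Every probe in the dump above fails in one of two ways because the kubenet-365981 profile was never started: kubectl probes report a missing context, and minikube host probes report a missing profile. A minimal sketch of how such a probe could be issued, assuming the debug helper simply shells out to kubectl (the helper name and signature are illustrative, not minikube's actual code):

// Sketch only: exec-ing kubectl with the profile's context.
package integration

import (
	"os/exec"
	"testing"
)

// debugProbe runs one kubectl probe against the given context and logs the
// combined output; when the context does not exist it yields the
// "context was not found" errors seen throughout the dump above.
func debugProbe(t *testing.T, profile string, args ...string) {
	kargs := append([]string{"--context", profile}, args...)
	out, err := exec.Command("kubectl", kargs...).CombinedOutput()
	t.Logf("kubectl %v:\n%s(err: %v)", kargs, out, err)
}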

x
+
TestNetworkPlugins/group/cilium (3.93s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it interferes with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-365981 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-365981

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-365981

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-365981

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-365981

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-365981

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-365981

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-365981

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-365981

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-365981

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-365981

>>> host: /etc/nsswitch.conf:
* Profile "cilium-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-365981"

>>> host: /etc/hosts:
* Profile "cilium-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-365981"

>>> host: /etc/resolv.conf:
* Profile "cilium-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-365981"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-365981

>>> host: crictl pods:
* Profile "cilium-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-365981"

>>> host: crictl containers:
* Profile "cilium-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-365981"

>>> k8s: describe netcat deployment:
error: context "cilium-365981" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-365981" does not exist

>>> k8s: netcat logs:
error: context "cilium-365981" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-365981" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-365981" does not exist

>>> k8s: coredns logs:
error: context "cilium-365981" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-365981" does not exist

>>> k8s: api server logs:
error: context "cilium-365981" does not exist

>>> host: /etc/cni:
* Profile "cilium-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-365981"

>>> host: ip a s:
* Profile "cilium-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-365981"

>>> host: ip r s:
* Profile "cilium-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-365981"

>>> host: iptables-save:
* Profile "cilium-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-365981"

>>> host: iptables table nat:
* Profile "cilium-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-365981"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-365981

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-365981

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-365981" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-365981" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-365981

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-365981

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-365981" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-365981" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-365981" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-365981" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-365981" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-365981"

>>> host: kubelet daemon config:
* Profile "cilium-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-365981"

>>> k8s: kubelet logs:
* Profile "cilium-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-365981"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-365981"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-365981"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/16899-218877/.minikube/ca.crt
    server: https://127.0.0.1:32939
  name: missing-upgrade-059561
contexts:
- context:
    cluster: missing-upgrade-059561
    user: missing-upgrade-059561
  name: missing-upgrade-059561
current-context: ""
kind: Config
preferences: {}
users:
- name: missing-upgrade-059561
  user:
    client-certificate: /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/missing-upgrade-059561/client.crt
    client-key: /home/jenkins/minikube-integration/16899-218877/.minikube/profiles/missing-upgrade-059561/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-365981

>>> host: docker daemon status:
* Profile "cilium-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-365981"

>>> host: docker daemon config:
* Profile "cilium-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-365981"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-365981"

>>> host: docker system info:
* Profile "cilium-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-365981"

>>> host: cri-docker daemon status:
* Profile "cilium-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-365981"

>>> host: cri-docker daemon config:
* Profile "cilium-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-365981"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-365981"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-365981"

>>> host: cri-dockerd version:
* Profile "cilium-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-365981"

>>> host: containerd daemon status:
* Profile "cilium-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-365981"

>>> host: containerd daemon config:
* Profile "cilium-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-365981"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-365981"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-365981"

>>> host: containerd config dump:
* Profile "cilium-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-365981"

>>> host: crio daemon status:
* Profile "cilium-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-365981"

>>> host: crio daemon config:
* Profile "cilium-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-365981"

>>> host: /etc/crio:
* Profile "cilium-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-365981"

>>> host: crio config:
* Profile "cilium-365981" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-365981"

----------------------- debugLogs end: cilium-365981 [took: 3.774590052s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-365981" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-365981
--- SKIP: TestNetworkPlugins/group/cilium (3.93s)

x
+
TestStartStop/group/disable-driver-mounts (0.14s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-526742" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-526742
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)
