Test Report: Docker_Linux_crio 17297

d70abdd8c088cadcf8720531a75f8262065eb1b0:2023-09-25:31157

Failed tests (6/304)

Order  Failed test                                               Duration (s)
25     TestAddons/parallel/Ingress                               151.12
141    TestFunctional/parallel/ImageCommands/ImageLoadFromFile   7.56
154    TestIngressAddonLegacy/serial/ValidateIngressAddons       180.25
204    TestMultiNode/serial/PingHostFrom2Pods                    2.98
225    TestRunningBinaryUpgrade                                  74.73
233    TestStoppedBinaryUpgrade/Upgrade                          98.4
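
Each of these failures can usually be reproduced in isolation with Go's -run filter against the minikube integration suite. A minimal sketch, using the driver and runtime visible in this run's own start commands; the -args harness flags are an assumption, not taken from this report:

	# Hypothetical local reproduction of the first failure below,
	# run from the minikube repository root.
	go test ./test/integration -v -timeout 30m \
	  -run "TestAddons/parallel/Ingress" \
	  -args --minikube-start-args="--driver=docker --container-runtime=crio"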
TestAddons/parallel/Ingress (151.12s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-440446 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-440446 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-440446 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [ccb2bf40-140d-4ceb-8efe-636811c3ed36] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [ccb2bf40-140d-4ceb-8efe-636811c3ed36] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.008686121s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p addons-440446 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-440446 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.061643843s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:262: (dbg) Run:  kubectl --context addons-440446 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p addons-440446 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p addons-440446 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p addons-440446 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p addons-440446 addons disable ingress --alsologtostderr -v=1: (7.575436268s)
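
Exit status 28 propagated through ssh is curl's operation-timeout code: the request to the in-cluster ingress never completed within the test's two-minute retry window. A quick manual triage pass, reusing the test's own invocations above (the -m 10 cap is an assumption added so a hang fails fast; the run=nginx label comes from the test itself):

	# Repeat the in-VM probe with an explicit timeout.
	out/minikube-linux-amd64 -p addons-440446 ssh -- \
	  curl -s -m 10 -H 'Host: nginx.example.com' http://127.0.0.1/
	# Confirm the controller and the backend pod are actually serving.
	kubectl --context addons-440446 -n ingress-nginx get pods -o wide
	kubectl --context addons-440446 get pods -l run=nginx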
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-440446
helpers_test.go:235: (dbg) docker inspect addons-440446:

-- stdout --
	[
	    {
	        "Id": "90eee19a54eaf1cb969df65e036d387c96e34da1113898f1345543c6bbcb923d",
	        "Created": "2023-09-25T10:34:08.332059327Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 14069,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-09-25T10:34:08.618385467Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:88569b0f9771c4a82a68adf2effda10d3720003bb7c688860ef975d692546171",
	        "ResolvConfPath": "/var/lib/docker/containers/90eee19a54eaf1cb969df65e036d387c96e34da1113898f1345543c6bbcb923d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/90eee19a54eaf1cb969df65e036d387c96e34da1113898f1345543c6bbcb923d/hostname",
	        "HostsPath": "/var/lib/docker/containers/90eee19a54eaf1cb969df65e036d387c96e34da1113898f1345543c6bbcb923d/hosts",
	        "LogPath": "/var/lib/docker/containers/90eee19a54eaf1cb969df65e036d387c96e34da1113898f1345543c6bbcb923d/90eee19a54eaf1cb969df65e036d387c96e34da1113898f1345543c6bbcb923d-json.log",
	        "Name": "/addons-440446",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-440446:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-440446",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c373ce72219cdba971213eca5ed3ba17115e9f020ea5669332f47a16ad355fb5-init/diff:/var/lib/docker/overlay2/f6c0857361d94c26f0cbf62f9795a30e8812e7f7d65e2dc29161b25ea9a7ede1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c373ce72219cdba971213eca5ed3ba17115e9f020ea5669332f47a16ad355fb5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c373ce72219cdba971213eca5ed3ba17115e9f020ea5669332f47a16ad355fb5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c373ce72219cdba971213eca5ed3ba17115e9f020ea5669332f47a16ad355fb5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-440446",
	                "Source": "/var/lib/docker/volumes/addons-440446/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-440446",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-440446",
	                "name.minikube.sigs.k8s.io": "addons-440446",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cc603ed40f4e69b12cee55659f74c0131b8c9e055513cce032d82fb0dc8bc59c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/cc603ed40f4e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-440446": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "90eee19a54ea",
	                        "addons-440446"
	                    ],
	                    "NetworkID": "65cafc98a65495b309cbe5fbf6eadd78496f48ed76140eaa1b2c48cd07cc9ed7",
	                    "EndpointID": "456326a2a5fa2d5dacfa37e6b8ce7ec121fbc7da37ba597bbd52e113d358cd9d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
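
The inspect dump above can be narrowed to the two fields that matter for this failure with docker inspect's Go-template formatter; both templates below appear in near-identical form in the harness's own cli_runner calls later in this log:

	# Host port forwarding to the node's sshd (22/tcp -> 127.0.0.1:32772 in this run).
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-440446
	# Static IP on the addons-440446 bridge network (192.168.49.2 in this run).
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' addons-440446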
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-440446 -n addons-440446
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-440446 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-440446 logs -n 25: (1.107043891s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-713911   | jenkins | v1.31.2 | 25 Sep 23 10:33 UTC |                     |
	|         | -p download-only-713911        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-713911   | jenkins | v1.31.2 | 25 Sep 23 10:33 UTC |                     |
	|         | -p download-only-713911        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2   |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | --all                          | minikube               | jenkins | v1.31.2 | 25 Sep 23 10:33 UTC | 25 Sep 23 10:33 UTC |
	| delete  | -p download-only-713911        | download-only-713911   | jenkins | v1.31.2 | 25 Sep 23 10:33 UTC | 25 Sep 23 10:33 UTC |
	| delete  | -p download-only-713911        | download-only-713911   | jenkins | v1.31.2 | 25 Sep 23 10:33 UTC | 25 Sep 23 10:33 UTC |
	| start   | --download-only -p             | download-docker-747732 | jenkins | v1.31.2 | 25 Sep 23 10:33 UTC |                     |
	|         | download-docker-747732         |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | -p download-docker-747732      | download-docker-747732 | jenkins | v1.31.2 | 25 Sep 23 10:33 UTC | 25 Sep 23 10:33 UTC |
	| start   | --download-only -p             | binary-mirror-732200   | jenkins | v1.31.2 | 25 Sep 23 10:33 UTC |                     |
	|         | binary-mirror-732200           |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --binary-mirror                |                        |         |         |                     |                     |
	|         | http://127.0.0.1:43745         |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-732200        | binary-mirror-732200   | jenkins | v1.31.2 | 25 Sep 23 10:33 UTC | 25 Sep 23 10:33 UTC |
	| start   | -p addons-440446               | addons-440446          | jenkins | v1.31.2 | 25 Sep 23 10:33 UTC | 25 Sep 23 10:35 UTC |
	|         | --wait=true --memory=4000      |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --addons=registry              |                        |         |         |                     |                     |
	|         | --addons=metrics-server        |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                        |         |         |                     |                     |
	|         | --addons=gcp-auth              |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --addons=ingress               |                        |         |         |                     |                     |
	|         | --addons=ingress-dns           |                        |         |         |                     |                     |
	|         | --addons=helm-tiller           |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-440446          | jenkins | v1.31.2 | 25 Sep 23 10:35 UTC | 25 Sep 23 10:35 UTC |
	|         | addons-440446                  |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p    | addons-440446          | jenkins | v1.31.2 | 25 Sep 23 10:35 UTC | 25 Sep 23 10:35 UTC |
	|         | addons-440446                  |                        |         |         |                     |                     |
	| addons  | enable headlamp                | addons-440446          | jenkins | v1.31.2 | 25 Sep 23 10:35 UTC | 25 Sep 23 10:35 UTC |
	|         | -p addons-440446               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | addons-440446 addons disable   | addons-440446          | jenkins | v1.31.2 | 25 Sep 23 10:35 UTC | 25 Sep 23 10:35 UTC |
	|         | helm-tiller --alsologtostderr  |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| ip      | addons-440446 ip               | addons-440446          | jenkins | v1.31.2 | 25 Sep 23 10:36 UTC | 25 Sep 23 10:36 UTC |
	| addons  | addons-440446 addons disable   | addons-440446          | jenkins | v1.31.2 | 25 Sep 23 10:36 UTC | 25 Sep 23 10:36 UTC |
	|         | registry --alsologtostderr     |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| addons  | addons-440446 addons           | addons-440446          | jenkins | v1.31.2 | 25 Sep 23 10:36 UTC | 25 Sep 23 10:36 UTC |
	|         | disable metrics-server         |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| ssh     | addons-440446 ssh curl -s      | addons-440446          | jenkins | v1.31.2 | 25 Sep 23 10:36 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:    |                        |         |         |                     |                     |
	|         | nginx.example.com'             |                        |         |         |                     |                     |
	| addons  | addons-440446 addons           | addons-440446          | jenkins | v1.31.2 | 25 Sep 23 10:37 UTC | 25 Sep 23 10:37 UTC |
	|         | disable csi-hostpath-driver    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | addons-440446 addons           | addons-440446          | jenkins | v1.31.2 | 25 Sep 23 10:37 UTC | 25 Sep 23 10:37 UTC |
	|         | disable volumesnapshots        |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| ip      | addons-440446 ip               | addons-440446          | jenkins | v1.31.2 | 25 Sep 23 10:38 UTC | 25 Sep 23 10:38 UTC |
	| addons  | addons-440446 addons disable   | addons-440446          | jenkins | v1.31.2 | 25 Sep 23 10:38 UTC | 25 Sep 23 10:38 UTC |
	|         | ingress-dns --alsologtostderr  |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| addons  | addons-440446 addons disable   | addons-440446          | jenkins | v1.31.2 | 25 Sep 23 10:38 UTC | 25 Sep 23 10:38 UTC |
	|         | ingress --alsologtostderr -v=1 |                        |         |         |                     |                     |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/25 10:33:44
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0925 10:33:44.698735   13399 out.go:296] Setting OutFile to fd 1 ...
	I0925 10:33:44.698857   13399 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 10:33:44.698865   13399 out.go:309] Setting ErrFile to fd 2...
	I0925 10:33:44.698870   13399 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 10:33:44.699082   13399 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17297-5744/.minikube/bin
	I0925 10:33:44.699671   13399 out.go:303] Setting JSON to false
	I0925 10:33:44.700425   13399 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":977,"bootTime":1695637048,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0925 10:33:44.700482   13399 start.go:138] virtualization: kvm guest
	I0925 10:33:44.703055   13399 out.go:177] * [addons-440446] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0925 10:33:44.704564   13399 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 10:33:44.705908   13399 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 10:33:44.704601   13399 notify.go:220] Checking for updates...
	I0925 10:33:44.707275   13399 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17297-5744/kubeconfig
	I0925 10:33:44.708726   13399 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17297-5744/.minikube
	I0925 10:33:44.710287   13399 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0925 10:33:44.711748   13399 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 10:33:44.713349   13399 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 10:33:44.733133   13399 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0925 10:33:44.733215   13399 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0925 10:33:44.783276   13399 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-09-25 10:33:44.774941524 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1042-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0925 10:33:44.783411   13399 docker.go:294] overlay module found
	I0925 10:33:44.785451   13399 out.go:177] * Using the docker driver based on user configuration
	I0925 10:33:44.787016   13399 start.go:298] selected driver: docker
	I0925 10:33:44.787035   13399 start.go:902] validating driver "docker" against <nil>
	I0925 10:33:44.787049   13399 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 10:33:44.787771   13399 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0925 10:33:44.837650   13399 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-09-25 10:33:44.830374187 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1042-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0925 10:33:44.837802   13399 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0925 10:33:44.837978   13399 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 10:33:44.839863   13399 out.go:177] * Using Docker driver with root privileges
	I0925 10:33:44.841466   13399 cni.go:84] Creating CNI manager for ""
	I0925 10:33:44.841485   13399 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0925 10:33:44.841495   13399 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0925 10:33:44.841509   13399 start_flags.go:321] config:
	{Name:addons-440446 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-440446 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 10:33:44.843254   13399 out.go:177] * Starting control plane node addons-440446 in cluster addons-440446
	I0925 10:33:44.844674   13399 cache.go:122] Beginning downloading kic base image for docker with crio
	I0925 10:33:44.846073   13399 out.go:177] * Pulling base image ...
	I0925 10:33:44.847459   13399 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I0925 10:33:44.847497   13399 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17297-5744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4
	I0925 10:33:44.847507   13399 cache.go:57] Caching tarball of preloaded images
	I0925 10:33:44.847547   13399 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I0925 10:33:44.847599   13399 preload.go:174] Found /home/jenkins/minikube-integration/17297-5744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0925 10:33:44.847614   13399 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I0925 10:33:44.847997   13399 profile.go:148] Saving config to /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/config.json ...
	I0925 10:33:44.848025   13399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/config.json: {Name:mk428186dbded0d1b9c747018f142e6579fd425f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 10:33:44.862150   13399 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 to local cache
	I0925 10:33:44.862233   13399 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local cache directory
	I0925 10:33:44.862246   13399 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local cache directory, skipping pull
	I0925 10:33:44.862250   13399 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 exists in cache, skipping pull
	I0925 10:33:44.862256   13399 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 as a tarball
	I0925 10:33:44.862263   13399 cache.go:163] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 from local cache
	I0925 10:33:55.561452   13399 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 from cached tarball
	I0925 10:33:55.561489   13399 cache.go:195] Successfully downloaded all kic artifacts
	I0925 10:33:55.561521   13399 start.go:365] acquiring machines lock for addons-440446: {Name:mk5978c222bcc59cf4c2da4ffa87ec3e16493b6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 10:33:55.561608   13399 start.go:369] acquired machines lock for "addons-440446" in 70.075µs
	I0925 10:33:55.561629   13399 start.go:93] Provisioning new machine with config: &{Name:addons-440446 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-440446 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0925 10:33:55.561704   13399 start.go:125] createHost starting for "" (driver="docker")
	I0925 10:33:55.564471   13399 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0925 10:33:55.564680   13399 start.go:159] libmachine.API.Create for "addons-440446" (driver="docker")
	I0925 10:33:55.564710   13399 client.go:168] LocalClient.Create starting
	I0925 10:33:55.564798   13399 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca.pem
	I0925 10:33:55.874972   13399 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/cert.pem
	I0925 10:33:55.967960   13399 cli_runner.go:164] Run: docker network inspect addons-440446 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0925 10:33:55.982866   13399 cli_runner.go:211] docker network inspect addons-440446 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0925 10:33:55.982925   13399 network_create.go:281] running [docker network inspect addons-440446] to gather additional debugging logs...
	I0925 10:33:55.982941   13399 cli_runner.go:164] Run: docker network inspect addons-440446
	W0925 10:33:55.996806   13399 cli_runner.go:211] docker network inspect addons-440446 returned with exit code 1
	I0925 10:33:55.996828   13399 network_create.go:284] error running [docker network inspect addons-440446]: docker network inspect addons-440446: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-440446 not found
	I0925 10:33:55.996844   13399 network_create.go:286] output of [docker network inspect addons-440446]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-440446 not found
	
	** /stderr **
	I0925 10:33:55.996883   13399 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0925 10:33:56.011994   13399 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000d30790}
	I0925 10:33:56.012040   13399 network_create.go:123] attempt to create docker network addons-440446 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0925 10:33:56.012082   13399 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-440446 addons-440446
	I0925 10:33:56.062095   13399 network_create.go:107] docker network addons-440446 192.168.49.0/24 created
	I0925 10:33:56.062135   13399 kic.go:117] calculated static IP "192.168.49.2" for the "addons-440446" container
	I0925 10:33:56.062204   13399 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0925 10:33:56.076602   13399 cli_runner.go:164] Run: docker volume create addons-440446 --label name.minikube.sigs.k8s.io=addons-440446 --label created_by.minikube.sigs.k8s.io=true
	I0925 10:33:56.093582   13399 oci.go:103] Successfully created a docker volume addons-440446
	I0925 10:33:56.093646   13399 cli_runner.go:164] Run: docker run --rm --name addons-440446-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-440446 --entrypoint /usr/bin/test -v addons-440446:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -d /var/lib
	I0925 10:34:03.295285   13399 cli_runner.go:217] Completed: docker run --rm --name addons-440446-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-440446 --entrypoint /usr/bin/test -v addons-440446:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -d /var/lib: (7.201601088s)
	I0925 10:34:03.295318   13399 oci.go:107] Successfully prepared a docker volume addons-440446
	I0925 10:34:03.295338   13399 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I0925 10:34:03.295359   13399 kic.go:190] Starting extracting preloaded images to volume ...
	I0925 10:34:03.295411   13399 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17297-5744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-440446:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir
	I0925 10:34:08.264922   13399 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17297-5744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-440446:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir: (4.96947694s)
	I0925 10:34:08.264955   13399 kic.go:199] duration metric: took 4.969592 seconds to extract preloaded images to volume
	W0925 10:34:08.265091   13399 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0925 10:34:08.265202   13399 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0925 10:34:08.318268   13399 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-440446 --name addons-440446 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-440446 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-440446 --network addons-440446 --ip 192.168.49.2 --volume addons-440446:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3
	I0925 10:34:08.627127   13399 cli_runner.go:164] Run: docker container inspect addons-440446 --format={{.State.Running}}
	I0925 10:34:08.643608   13399 cli_runner.go:164] Run: docker container inspect addons-440446 --format={{.State.Status}}
	I0925 10:34:08.661191   13399 cli_runner.go:164] Run: docker exec addons-440446 stat /var/lib/dpkg/alternatives/iptables
	I0925 10:34:08.704544   13399 oci.go:144] the created container "addons-440446" has a running status.
	I0925 10:34:08.704573   13399 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17297-5744/.minikube/machines/addons-440446/id_rsa...
	I0925 10:34:09.107161   13399 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17297-5744/.minikube/machines/addons-440446/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0925 10:34:09.128680   13399 cli_runner.go:164] Run: docker container inspect addons-440446 --format={{.State.Status}}
	I0925 10:34:09.143938   13399 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0925 10:34:09.143959   13399 kic_runner.go:114] Args: [docker exec --privileged addons-440446 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0925 10:34:09.215180   13399 cli_runner.go:164] Run: docker container inspect addons-440446 --format={{.State.Status}}
	I0925 10:34:09.232099   13399 machine.go:88] provisioning docker machine ...
	I0925 10:34:09.232133   13399 ubuntu.go:169] provisioning hostname "addons-440446"
	I0925 10:34:09.232197   13399 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-440446
	I0925 10:34:09.251605   13399 main.go:141] libmachine: Using SSH client type: native
	I0925 10:34:09.251926   13399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0925 10:34:09.251941   13399 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-440446 && echo "addons-440446" | sudo tee /etc/hostname
	I0925 10:34:09.386126   13399 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-440446
	
	I0925 10:34:09.386212   13399 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-440446
	I0925 10:34:09.402303   13399 main.go:141] libmachine: Using SSH client type: native
	I0925 10:34:09.402611   13399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0925 10:34:09.402630   13399 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-440446' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-440446/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-440446' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0925 10:34:09.528153   13399 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0925 10:34:09.528185   13399 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17297-5744/.minikube CaCertPath:/home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17297-5744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17297-5744/.minikube}
	I0925 10:34:09.528209   13399 ubuntu.go:177] setting up certificates
	I0925 10:34:09.528217   13399 provision.go:83] configureAuth start
	I0925 10:34:09.528259   13399 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-440446
	I0925 10:34:09.544101   13399 provision.go:138] copyHostCerts
	I0925 10:34:09.544181   13399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17297-5744/.minikube/ca.pem (1078 bytes)
	I0925 10:34:09.544303   13399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17297-5744/.minikube/cert.pem (1123 bytes)
	I0925 10:34:09.544379   13399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17297-5744/.minikube/key.pem (1675 bytes)
	I0925 10:34:09.544436   13399 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17297-5744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca-key.pem org=jenkins.addons-440446 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-440446]
	I0925 10:34:09.740079   13399 provision.go:172] copyRemoteCerts
	I0925 10:34:09.740141   13399 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0925 10:34:09.740176   13399 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-440446
	I0925 10:34:09.755796   13399 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/addons-440446/id_rsa Username:docker}
	I0925 10:34:09.844549   13399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0925 10:34:09.864356   13399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0925 10:34:09.883937   13399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0925 10:34:09.902901   13399 provision.go:86] duration metric: configureAuth took 374.674429ms
	I0925 10:34:09.902925   13399 ubuntu.go:193] setting minikube options for container-runtime
	I0925 10:34:09.903069   13399 config.go:182] Loaded profile config "addons-440446": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0925 10:34:09.903150   13399 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-440446
	I0925 10:34:09.919221   13399 main.go:141] libmachine: Using SSH client type: native
	I0925 10:34:09.919628   13399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0925 10:34:09.919657   13399 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0925 10:34:10.125474   13399 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0925 10:34:10.125498   13399 machine.go:91] provisioned docker machine in 893.378906ms
	I0925 10:34:10.125509   13399 client.go:171] LocalClient.Create took 14.560791062s
	I0925 10:34:10.125525   13399 start.go:167] duration metric: libmachine.API.Create for "addons-440446" took 14.560844578s
	I0925 10:34:10.125534   13399 start.go:300] post-start starting for "addons-440446" (driver="docker")
	I0925 10:34:10.125545   13399 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0925 10:34:10.125594   13399 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0925 10:34:10.125631   13399 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-440446
	I0925 10:34:10.141709   13399 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/addons-440446/id_rsa Username:docker}
	I0925 10:34:10.232703   13399 ssh_runner.go:195] Run: cat /etc/os-release
	I0925 10:34:10.235395   13399 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0925 10:34:10.235422   13399 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0925 10:34:10.235434   13399 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0925 10:34:10.235441   13399 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0925 10:34:10.235451   13399 filesync.go:126] Scanning /home/jenkins/minikube-integration/17297-5744/.minikube/addons for local assets ...
	I0925 10:34:10.235510   13399 filesync.go:126] Scanning /home/jenkins/minikube-integration/17297-5744/.minikube/files for local assets ...
	I0925 10:34:10.235532   13399 start.go:303] post-start completed in 109.992996ms
	I0925 10:34:10.235786   13399 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-440446
	I0925 10:34:10.251526   13399 profile.go:148] Saving config to /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/config.json ...
	I0925 10:34:10.251771   13399 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0925 10:34:10.251807   13399 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-440446
	I0925 10:34:10.267543   13399 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/addons-440446/id_rsa Username:docker}
	I0925 10:34:10.353439   13399 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0925 10:34:10.357065   13399 start.go:128] duration metric: createHost completed in 14.795348622s
	I0925 10:34:10.357082   13399 start.go:83] releasing machines lock for "addons-440446", held for 14.795463648s
	I0925 10:34:10.357152   13399 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-440446
	I0925 10:34:10.373004   13399 ssh_runner.go:195] Run: cat /version.json
	I0925 10:34:10.373061   13399 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-440446
	I0925 10:34:10.373128   13399 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0925 10:34:10.373195   13399 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-440446
	I0925 10:34:10.390225   13399 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/addons-440446/id_rsa Username:docker}
	I0925 10:34:10.391158   13399 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/addons-440446/id_rsa Username:docker}
	I0925 10:34:10.566834   13399 ssh_runner.go:195] Run: systemctl --version
	I0925 10:34:10.570591   13399 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0925 10:34:10.704959   13399 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0925 10:34:10.709306   13399 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0925 10:34:10.726411   13399 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0925 10:34:10.726486   13399 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0925 10:34:10.752090   13399 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
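
The two find runs above sideline the stock loopback and bridge/podman CNI configs by renaming them with a .mk_disabled suffix so they cannot shadow the CNI minikube installs later. A hedged sketch of the inverse operation, should the stock configs ever need restoring:

	# Sketch: undo minikube's rename of CNI configs to *.mk_disabled
	sudo find /etc/cni/net.d -maxdepth 1 -name '*.mk_disabled' \
	  -exec sh -c 'mv "$1" "${1%.mk_disabled}"' _ {} \;
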
	I0925 10:34:10.752111   13399 start.go:469] detecting cgroup driver to use...
	I0925 10:34:10.752142   13399 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0925 10:34:10.752187   13399 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0925 10:34:10.764871   13399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0925 10:34:10.774316   13399 docker.go:197] disabling cri-docker service (if available) ...
	I0925 10:34:10.774370   13399 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0925 10:34:10.786125   13399 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0925 10:34:10.797990   13399 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0925 10:34:10.873303   13399 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0925 10:34:10.949329   13399 docker.go:213] disabling docker service ...
	I0925 10:34:10.949401   13399 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0925 10:34:10.965665   13399 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0925 10:34:10.975165   13399 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0925 10:34:11.052671   13399 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0925 10:34:11.128812   13399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0925 10:34:11.138036   13399 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0925 10:34:11.151180   13399 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0925 10:34:11.151228   13399 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0925 10:34:11.159211   13399 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0925 10:34:11.159259   13399 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0925 10:34:11.167227   13399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0925 10:34:11.175161   13399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0925 10:34:11.183023   13399 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0925 10:34:11.190472   13399 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0925 10:34:11.197103   13399 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0925 10:34:11.203775   13399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 10:34:11.279409   13399 ssh_runner.go:195] Run: sudo systemctl restart crio
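
The three sed edits above pin the pause image, switch CRI-O to the cgroupfs manager, and append conmon_cgroup = "pod" before crio is restarted. A quick sketch for confirming the drop-in ended up as intended (same file as in the commands above):

	# Sketch: confirm the CRI-O drop-in now carries the values set above
	grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
	# expected:
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
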
	I0925 10:34:11.374520   13399 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0925 10:34:11.374582   13399 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0925 10:34:11.378047   13399 start.go:537] Will wait 60s for crictl version
	I0925 10:34:11.378111   13399 ssh_runner.go:195] Run: which crictl
	I0925 10:34:11.381856   13399 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0925 10:34:11.412863   13399 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0925 10:34:11.412963   13399 ssh_runner.go:195] Run: crio --version
	I0925 10:34:11.444125   13399 ssh_runner.go:195] Run: crio --version
	I0925 10:34:11.476314   13399 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.6 ...
	I0925 10:34:11.477717   13399 cli_runner.go:164] Run: docker network inspect addons-440446 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
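
The long --format template above flattens the network's name, driver, IPAM ranges, MTU, and per-container IPs into a single JSON-like line. A smaller sketch of the same idea, pulling just the subnet from the same network:

	# Sketch: extract only the subnet of the addons-440446 network
	docker network inspect addons-440446 \
	  --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
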
	I0925 10:34:11.492647   13399 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0925 10:34:11.495837   13399 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
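
That one-liner is minikube's idempotent /etc/hosts update: filter out any stale host.minikube.internal entry, append a fresh one, and copy the temp file back over /etc/hosts. The same pattern, generalized into a sketch (IP and NAME are illustrative variables, not part of the original command):

	# Sketch: idempotent /etc/hosts entry update, generalized from the command above
	IP=192.168.49.1; NAME=host.minikube.internal
	{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts
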
	I0925 10:34:11.504866   13399 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I0925 10:34:11.504930   13399 ssh_runner.go:195] Run: sudo crictl images --output json
	I0925 10:34:11.551459   13399 crio.go:496] all images are preloaded for cri-o runtime.
	I0925 10:34:11.551479   13399 crio.go:415] Images already preloaded, skipping extraction
	I0925 10:34:11.551522   13399 ssh_runner.go:195] Run: sudo crictl images --output json
	I0925 10:34:11.580792   13399 crio.go:496] all images are preloaded for cri-o runtime.
	I0925 10:34:11.580811   13399 cache_images.go:84] Images are preloaded, skipping loading
	I0925 10:34:11.580861   13399 ssh_runner.go:195] Run: crio config
	I0925 10:34:11.618887   13399 cni.go:84] Creating CNI manager for ""
	I0925 10:34:11.618915   13399 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0925 10:34:11.618938   13399 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0925 10:34:11.618967   13399 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-440446 NodeName:addons-440446 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0925 10:34:11.619119   13399 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-440446"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
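
The block from "kubeadm config:" down to here is the InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration bundle that gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines below (the 2094-byte scp). As a hedged sanity check, kubeadm itself can print its defaults for the same component configs for side-by-side comparison:

	# Sketch: print kubeadm's defaults to diff against the generated config above
	kubeadm config print init-defaults \
	  --component-configs KubeletConfiguration,KubeProxyConfiguration
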
	
	I0925 10:34:11.619206   13399 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-440446 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:addons-440446 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
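
The unit text above is the 10-kubeadm.conf drop-in minikube writes for the kubelet; the empty ExecStart= line first resets the packaged command so the flags that follow fully replace it. Once the scp lines below have landed the files, the effective unit can be inspected with:

	# Sketch: show the kubelet unit plus minikube's drop-in as systemd sees them
	systemctl cat kubelet
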
	I0925 10:34:11.619265   13399 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I0925 10:34:11.627029   13399 binaries.go:44] Found k8s binaries, skipping transfer
	I0925 10:34:11.627080   13399 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0925 10:34:11.634195   13399 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I0925 10:34:11.647922   13399 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0925 10:34:11.661595   13399 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0925 10:34:11.675573   13399 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0925 10:34:11.678446   13399 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0925 10:34:11.687362   13399 certs.go:56] Setting up /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446 for IP: 192.168.49.2
	I0925 10:34:11.687401   13399 certs.go:190] acquiring lock for shared ca certs: {Name:mk1dc4321044392bda6d0b04ee5f4e5cca314d3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 10:34:11.687514   13399 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17297-5744/.minikube/ca.key
	I0925 10:34:11.968178   13399 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17297-5744/.minikube/ca.crt ...
	I0925 10:34:11.968208   13399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17297-5744/.minikube/ca.crt: {Name:mkf2f841c296564c14fa7ff66122972ae154df86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 10:34:11.968367   13399 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17297-5744/.minikube/ca.key ...
	I0925 10:34:11.968378   13399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17297-5744/.minikube/ca.key: {Name:mk819ba2347b4824b6e2454c0c4b86fdae563e14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 10:34:11.968443   13399 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17297-5744/.minikube/proxy-client-ca.key
	I0925 10:34:12.101114   13399 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17297-5744/.minikube/proxy-client-ca.crt ...
	I0925 10:34:12.101140   13399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17297-5744/.minikube/proxy-client-ca.crt: {Name:mke4493f7caa3095bdadbb1993a9e2f8fb58fde5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 10:34:12.101279   13399 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17297-5744/.minikube/proxy-client-ca.key ...
	I0925 10:34:12.101288   13399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17297-5744/.minikube/proxy-client-ca.key: {Name:mk3da005c3ef8aec80dab8df10bfd5f0342994b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 10:34:12.101384   13399 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/client.key
	I0925 10:34:12.101396   13399 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/client.crt with IP's: []
	I0925 10:34:12.237928   13399 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/client.crt ...
	I0925 10:34:12.237958   13399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/client.crt: {Name:mkb616af0158fad6b4f8f299104e17d293409e55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 10:34:12.238118   13399 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/client.key ...
	I0925 10:34:12.238128   13399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/client.key: {Name:mk99d8caf164a96144090fbc5242683d87e9ea82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 10:34:12.238189   13399 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/apiserver.key.dd3b5fb2
	I0925 10:34:12.238205   13399 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0925 10:34:12.372788   13399 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/apiserver.crt.dd3b5fb2 ...
	I0925 10:34:12.372814   13399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/apiserver.crt.dd3b5fb2: {Name:mk651af67d017177d9c0ec49e0d6bfb7b26bf18b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 10:34:12.372960   13399 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/apiserver.key.dd3b5fb2 ...
	I0925 10:34:12.372971   13399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/apiserver.key.dd3b5fb2: {Name:mk6cce39ea944aaa348ea1d3e80e4b80f0199947 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 10:34:12.373034   13399 certs.go:337] copying /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/apiserver.crt
	I0925 10:34:12.373095   13399 certs.go:341] copying /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/apiserver.key
	I0925 10:34:12.373137   13399 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/proxy-client.key
	I0925 10:34:12.373152   13399 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/proxy-client.crt with IP's: []
	I0925 10:34:12.738283   13399 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/proxy-client.crt ...
	I0925 10:34:12.738322   13399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/proxy-client.crt: {Name:mkbb1df320841bb42fd4c9918d3020b20e69e997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 10:34:12.738508   13399 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/proxy-client.key ...
	I0925 10:34:12.738522   13399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/proxy-client.key: {Name:mk382c05c276f04a18932b172ad6c9537659ceb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 10:34:12.738692   13399 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca-key.pem (1675 bytes)
	I0925 10:34:12.738730   13399 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca.pem (1078 bytes)
	I0925 10:34:12.738759   13399 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/home/jenkins/minikube-integration/17297-5744/.minikube/certs/cert.pem (1123 bytes)
	I0925 10:34:12.738786   13399 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/home/jenkins/minikube-integration/17297-5744/.minikube/certs/key.pem (1675 bytes)
	I0925 10:34:12.739312   13399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0925 10:34:12.759938   13399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0925 10:34:12.779318   13399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0925 10:34:12.799159   13399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0925 10:34:12.818426   13399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0925 10:34:12.837455   13399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0925 10:34:12.856711   13399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0925 10:34:12.876197   13399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0925 10:34:12.895645   13399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0925 10:34:12.915120   13399 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0925 10:34:12.929485   13399 ssh_runner.go:195] Run: openssl version
	I0925 10:34:12.934395   13399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0925 10:34:12.942137   13399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0925 10:34:12.944983   13399 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 25 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I0925 10:34:12.945027   13399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0925 10:34:12.950740   13399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0925 10:34:12.958244   13399 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0925 10:34:12.960863   13399 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0925 10:34:12.960909   13399 kubeadm.go:404] StartCluster: {Name:addons-440446 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-440446 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 10:34:12.960992   13399 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0925 10:34:12.961037   13399 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0925 10:34:12.990552   13399 cri.go:89] found id: ""
	I0925 10:34:12.990612   13399 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0925 10:34:12.997933   13399 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0925 10:34:13.005178   13399 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0925 10:34:13.005228   13399 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0925 10:34:13.012142   13399 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0925 10:34:13.012191   13399 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
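
The long --ignore-preflight-errors list above exists because the docker driver runs the node inside a container (see "ignoring SystemVerification for kubeadm because of docker driver" a few lines up), so host-level checks like Swap, Mem, and SystemVerification would otherwise fail. A hedged rehearsal of the same invocation that leaves the node untouched:

	# Sketch: dry-run the same kubeadm init without mutating the node
	sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
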
	I0925 10:34:13.053776   13399 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I0925 10:34:13.053830   13399 kubeadm.go:322] [preflight] Running pre-flight checks
	I0925 10:34:13.086220   13399 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0925 10:34:13.086299   13399 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1042-gcp
	I0925 10:34:13.086348   13399 kubeadm.go:322] OS: Linux
	I0925 10:34:13.086441   13399 kubeadm.go:322] CGROUPS_CPU: enabled
	I0925 10:34:13.086532   13399 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0925 10:34:13.086604   13399 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0925 10:34:13.086667   13399 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0925 10:34:13.086728   13399 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0925 10:34:13.086797   13399 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0925 10:34:13.086862   13399 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0925 10:34:13.086925   13399 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0925 10:34:13.087012   13399 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0925 10:34:13.143710   13399 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0925 10:34:13.143832   13399 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0925 10:34:13.143910   13399 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0925 10:34:13.325194   13399 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0925 10:34:13.328806   13399 out.go:204]   - Generating certificates and keys ...
	I0925 10:34:13.328934   13399 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0925 10:34:13.329030   13399 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0925 10:34:13.535170   13399 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0925 10:34:13.698623   13399 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0925 10:34:13.804117   13399 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0925 10:34:13.881429   13399 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0925 10:34:14.177749   13399 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0925 10:34:14.177903   13399 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-440446 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0925 10:34:14.298727   13399 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0925 10:34:14.298863   13399 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-440446 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0925 10:34:14.394304   13399 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0925 10:34:14.658447   13399 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0925 10:34:14.858829   13399 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0925 10:34:14.858928   13399 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0925 10:34:15.151281   13399 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0925 10:34:15.229379   13399 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0925 10:34:15.295798   13399 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0925 10:34:15.448178   13399 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0925 10:34:15.448689   13399 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0925 10:34:15.450880   13399 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0925 10:34:15.453040   13399 out.go:204]   - Booting up control plane ...
	I0925 10:34:15.453158   13399 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0925 10:34:15.453277   13399 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0925 10:34:15.453372   13399 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0925 10:34:15.460517   13399 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0925 10:34:15.462205   13399 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0925 10:34:15.462275   13399 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0925 10:34:15.536497   13399 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0925 10:34:20.538126   13399 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.001682 seconds
	I0925 10:34:20.538270   13399 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0925 10:34:20.548509   13399 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0925 10:34:21.067116   13399 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0925 10:34:21.067314   13399 kubeadm.go:322] [mark-control-plane] Marking the node addons-440446 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0925 10:34:21.576974   13399 kubeadm.go:322] [bootstrap-token] Using token: szh64i.1mhxdkmfhsjp9upk
	I0925 10:34:21.578801   13399 out.go:204]   - Configuring RBAC rules ...
	I0925 10:34:21.578923   13399 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0925 10:34:21.582068   13399 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0925 10:34:21.587470   13399 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0925 10:34:21.589927   13399 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0925 10:34:21.592229   13399 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0925 10:34:21.595387   13399 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0925 10:34:21.605153   13399 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0925 10:34:21.811372   13399 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0925 10:34:21.986887   13399 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0925 10:34:21.988057   13399 kubeadm.go:322] 
	I0925 10:34:21.988145   13399 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0925 10:34:21.988158   13399 kubeadm.go:322] 
	I0925 10:34:21.988246   13399 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0925 10:34:21.988253   13399 kubeadm.go:322] 
	I0925 10:34:21.988290   13399 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0925 10:34:21.988374   13399 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0925 10:34:21.988457   13399 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0925 10:34:21.988479   13399 kubeadm.go:322] 
	I0925 10:34:21.988569   13399 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0925 10:34:21.988579   13399 kubeadm.go:322] 
	I0925 10:34:21.988674   13399 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0925 10:34:21.988694   13399 kubeadm.go:322] 
	I0925 10:34:21.988777   13399 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0925 10:34:21.988895   13399 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0925 10:34:21.988998   13399 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0925 10:34:21.989009   13399 kubeadm.go:322] 
	I0925 10:34:21.989135   13399 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0925 10:34:21.989244   13399 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0925 10:34:21.989256   13399 kubeadm.go:322] 
	I0925 10:34:21.989365   13399 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token szh64i.1mhxdkmfhsjp9upk \
	I0925 10:34:21.989514   13399 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:1c3306aef1d6006cd71809f67dab403b34b4e2df13ef053b70f810d540f79657 \
	I0925 10:34:21.989549   13399 kubeadm.go:322] 	--control-plane 
	I0925 10:34:21.989566   13399 kubeadm.go:322] 
	I0925 10:34:21.989688   13399 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0925 10:34:21.989701   13399 kubeadm.go:322] 
	I0925 10:34:21.989809   13399 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token szh64i.1mhxdkmfhsjp9upk \
	I0925 10:34:21.989936   13399 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:1c3306aef1d6006cd71809f67dab403b34b4e2df13ef053b70f810d540f79657 
	I0925 10:34:21.991450   13399 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1042-gcp\n", err: exit status 1
	I0925 10:34:21.991620   13399 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
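
The join commands above embed a discovery token plus a CA cert hash. If the printed output is lost, the hash half can be recomputed from the cluster CA; a sketch using the certificatesDir from this run's config (/var/lib/minikube/certs) and the standard kubeadm recipe, assuming an RSA CA key:

	# Sketch: recompute --discovery-token-ca-cert-hash from the cluster CA
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
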
	I0925 10:34:21.991650   13399 cni.go:84] Creating CNI manager for ""
	I0925 10:34:21.991663   13399 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0925 10:34:21.993483   13399 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0925 10:34:21.994722   13399 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0925 10:34:21.997970   13399 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I0925 10:34:21.997984   13399 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0925 10:34:22.012840   13399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0925 10:34:22.621259   13399 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0925 10:34:22.621335   13399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:34:22.621377   13399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=1bf6c3d5317028f348e55ea19d261973a6487d3c minikube.k8s.io/name=addons-440446 minikube.k8s.io/updated_at=2023_09_25T10_34_22_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:34:22.627652   13399 ops.go:34] apiserver oom_adj: -16
	I0925 10:34:22.693209   13399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:34:22.756902   13399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:34:23.317792   13399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:34:23.817463   13399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:34:24.317319   13399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:34:24.817870   13399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:34:25.317848   13399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:34:25.818027   13399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:34:26.317702   13399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:34:26.817544   13399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:34:27.317333   13399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:34:27.817974   13399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:34:28.317247   13399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:34:28.817604   13399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:34:29.317951   13399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:34:29.817803   13399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:34:30.317839   13399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:34:30.817464   13399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:34:31.317842   13399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:34:31.817830   13399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:34:32.317883   13399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:34:32.818206   13399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:34:33.317580   13399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:34:33.817457   13399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:34:34.317798   13399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:34:34.817214   13399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:34:34.879813   13399 kubeadm.go:1081] duration metric: took 12.25852207s to wait for elevateKubeSystemPrivileges.
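
The burst of identical "kubectl get sa default" runs above (10:34:22 through 10:34:34) is a readiness poll: minikube retries until the default service account exists before granting kube-system privileges. The same wait, as a hand-rolled sketch against the same kubeconfig:

	# Sketch: poll for the default service account the way the loop above does
	until sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done
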
	I0925 10:34:34.879843   13399 kubeadm.go:406] StartCluster complete in 21.918937086s
	I0925 10:34:34.879886   13399 settings.go:142] acquiring lock: {Name:mk1ac20708e0ba811b0d8618989be560267b849d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 10:34:34.879995   13399 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17297-5744/kubeconfig
	I0925 10:34:34.880488   13399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17297-5744/kubeconfig: {Name:mkcd9251a91cb443db17b5c9d69f4674dad74ecd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 10:34:34.880862   13399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0925 10:34:34.880864   13399 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
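
The toEnable map above is the per-profile addon switchboard; each true entry fans out into the "Setting addon ... in addons-440446" lines that follow. The user-facing equivalent of flipping one entry, using the same binary and profile as this report:

	# Sketch: enable a single addon for this profile from the CLI
	out/minikube-linux-amd64 -p addons-440446 addons enable ingress
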
	I0925 10:34:34.880935   13399 addons.go:69] Setting cloud-spanner=true in profile "addons-440446"
	I0925 10:34:34.880948   13399 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-440446"
	I0925 10:34:34.880960   13399 addons.go:231] Setting addon cloud-spanner=true in "addons-440446"
	I0925 10:34:34.880979   13399 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-440446"
	I0925 10:34:34.881001   13399 host.go:66] Checking if "addons-440446" exists ...
	I0925 10:34:34.881009   13399 host.go:66] Checking if "addons-440446" exists ...
	I0925 10:34:34.881087   13399 addons.go:69] Setting default-storageclass=true in profile "addons-440446"
	I0925 10:34:34.880942   13399 addons.go:69] Setting volumesnapshots=true in profile "addons-440446"
	I0925 10:34:34.881104   13399 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-440446"
	I0925 10:34:34.881117   13399 addons.go:231] Setting addon volumesnapshots=true in "addons-440446"
	I0925 10:34:34.881164   13399 host.go:66] Checking if "addons-440446" exists ...
	I0925 10:34:34.881342   13399 cli_runner.go:164] Run: docker container inspect addons-440446 --format={{.State.Status}}
	I0925 10:34:34.881423   13399 addons.go:69] Setting ingress-dns=true in profile "addons-440446"
	I0925 10:34:34.881462   13399 addons.go:231] Setting addon ingress-dns=true in "addons-440446"
	I0925 10:34:34.881500   13399 cli_runner.go:164] Run: docker container inspect addons-440446 --format={{.State.Status}}
	I0925 10:34:34.881509   13399 addons.go:69] Setting metrics-server=true in profile "addons-440446"
	I0925 10:34:34.881524   13399 host.go:66] Checking if "addons-440446" exists ...
	I0925 10:34:34.881534   13399 addons.go:231] Setting addon metrics-server=true in "addons-440446"
	I0925 10:34:34.881554   13399 addons.go:69] Setting gcp-auth=true in profile "addons-440446"
	I0925 10:34:34.881583   13399 host.go:66] Checking if "addons-440446" exists ...
	I0925 10:34:34.881597   13399 addons.go:69] Setting inspektor-gadget=true in profile "addons-440446"
	I0925 10:34:34.881614   13399 addons.go:231] Setting addon inspektor-gadget=true in "addons-440446"
	I0925 10:34:34.881629   13399 cli_runner.go:164] Run: docker container inspect addons-440446 --format={{.State.Status}}
	I0925 10:34:34.881659   13399 host.go:66] Checking if "addons-440446" exists ...
	I0925 10:34:34.881987   13399 cli_runner.go:164] Run: docker container inspect addons-440446 --format={{.State.Status}}
	I0925 10:34:34.882013   13399 cli_runner.go:164] Run: docker container inspect addons-440446 --format={{.State.Status}}
	I0925 10:34:34.882101   13399 cli_runner.go:164] Run: docker container inspect addons-440446 --format={{.State.Status}}
	I0925 10:34:34.882260   13399 addons.go:69] Setting ingress=true in profile "addons-440446"
	I0925 10:34:34.882286   13399 addons.go:231] Setting addon ingress=true in "addons-440446"
	I0925 10:34:34.882337   13399 host.go:66] Checking if "addons-440446" exists ...
	I0925 10:34:34.882791   13399 cli_runner.go:164] Run: docker container inspect addons-440446 --format={{.State.Status}}
	I0925 10:34:34.881085   13399 config.go:182] Loaded profile config "addons-440446": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0925 10:34:34.883534   13399 addons.go:69] Setting registry=true in profile "addons-440446"
	I0925 10:34:34.883549   13399 addons.go:231] Setting addon registry=true in "addons-440446"
	I0925 10:34:34.883597   13399 host.go:66] Checking if "addons-440446" exists ...
	I0925 10:34:34.883626   13399 cli_runner.go:164] Run: docker container inspect addons-440446 --format={{.State.Status}}
	I0925 10:34:34.883823   13399 addons.go:69] Setting storage-provisioner=true in profile "addons-440446"
	I0925 10:34:34.883840   13399 addons.go:231] Setting addon storage-provisioner=true in "addons-440446"
	I0925 10:34:34.883877   13399 host.go:66] Checking if "addons-440446" exists ...
	I0925 10:34:34.884305   13399 cli_runner.go:164] Run: docker container inspect addons-440446 --format={{.State.Status}}
	I0925 10:34:34.884539   13399 cli_runner.go:164] Run: docker container inspect addons-440446 --format={{.State.Status}}
	I0925 10:34:34.881527   13399 addons.go:69] Setting helm-tiller=true in profile "addons-440446"
	I0925 10:34:34.886927   13399 addons.go:231] Setting addon helm-tiller=true in "addons-440446"
	I0925 10:34:34.886985   13399 host.go:66] Checking if "addons-440446" exists ...
	I0925 10:34:34.887436   13399 cli_runner.go:164] Run: docker container inspect addons-440446 --format={{.State.Status}}
	I0925 10:34:34.881584   13399 mustload.go:65] Loading cluster: addons-440446
	I0925 10:34:34.887826   13399 config.go:182] Loaded profile config "addons-440446": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0925 10:34:34.888133   13399 cli_runner.go:164] Run: docker container inspect addons-440446 --format={{.State.Status}}
	I0925 10:34:34.926778   13399 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0925 10:34:34.928516   13399 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0925 10:34:34.930058   13399 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0925 10:34:34.931459   13399 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0925 10:34:34.932805   13399 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0925 10:34:34.934341   13399 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0925 10:34:34.933532   13399 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-440446" context rescaled to 1 replicas
	I0925 10:34:34.934297   13399 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0925 10:34:34.935759   13399 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.20.0
	I0925 10:34:34.937286   13399 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0925 10:34:34.937300   13399 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0925 10:34:34.937339   13399 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-440446
	I0925 10:34:34.935790   13399 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.10
	I0925 10:34:34.935825   13399 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0925 10:34:34.935821   13399 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0925 10:34:34.939425   13399 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I0925 10:34:34.939437   13399 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0925 10:34:34.942572   13399 out.go:177] * Verifying Kubernetes components...
	I0925 10:34:34.941029   13399 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-440446
	I0925 10:34:34.941041   13399 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0925 10:34:34.943897   13399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 10:34:34.944004   13399 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-440446
	I0925 10:34:34.945049   13399 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0925 10:34:34.945545   13399 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0925 10:34:34.945087   13399 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0925 10:34:34.948786   13399 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0925 10:34:34.948801   13399 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0925 10:34:34.948848   13399 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-440446
	I0925 10:34:34.947735   13399 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0925 10:34:34.950768   13399 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0925 10:34:34.952080   13399 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0925 10:34:34.952096   13399 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0925 10:34:34.952142   13399 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-440446
	I0925 10:34:34.950850   13399 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0925 10:34:34.950919   13399 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0925 10:34:34.950937   13399 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0925 10:34:34.953700   13399 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0925 10:34:34.953751   13399 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-440446
	I0925 10:34:34.953935   13399 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0925 10:34:34.953946   13399 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0925 10:34:34.953984   13399 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-440446
	I0925 10:34:34.954156   13399 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0925 10:34:34.954204   13399 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-440446
	I0925 10:34:34.955781   13399 out.go:177]   - Using image docker.io/registry:2.8.1
	I0925 10:34:34.957065   13399 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0925 10:34:34.958354   13399 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I0925 10:34:34.958369   13399 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0925 10:34:34.958423   13399 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-440446
	I0925 10:34:34.963808   13399 addons.go:231] Setting addon default-storageclass=true in "addons-440446"
	I0925 10:34:34.963858   13399 host.go:66] Checking if "addons-440446" exists ...
	I0925 10:34:34.964339   13399 cli_runner.go:164] Run: docker container inspect addons-440446 --format={{.State.Status}}
	I0925 10:34:34.966123   13399 host.go:66] Checking if "addons-440446" exists ...
	I0925 10:34:34.972269   13399 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/addons-440446/id_rsa Username:docker}
	I0925 10:34:34.975365   13399 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0925 10:34:34.977197   13399 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.0
	I0925 10:34:34.978832   13399 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0925 10:34:34.985280   13399 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0925 10:34:34.985300   13399 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I0925 10:34:34.982737   13399 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/addons-440446/id_rsa Username:docker}
	I0925 10:34:34.985448   13399 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-440446
	I0925 10:34:34.987497   13399 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/addons-440446/id_rsa Username:docker}
	I0925 10:34:34.999235   13399 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/addons-440446/id_rsa Username:docker}
	I0925 10:34:35.009246   13399 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/addons-440446/id_rsa Username:docker}
	I0925 10:34:35.017614   13399 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/addons-440446/id_rsa Username:docker}
	I0925 10:34:35.021011   13399 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/addons-440446/id_rsa Username:docker}
	I0925 10:34:35.021145   13399 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/addons-440446/id_rsa Username:docker}
	I0925 10:34:35.023308   13399 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/addons-440446/id_rsa Username:docker}
	I0925 10:34:35.023804   13399 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0925 10:34:35.023820   13399 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0925 10:34:35.023866   13399 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-440446
	I0925 10:34:35.027661   13399 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/addons-440446/id_rsa Username:docker}
	I0925 10:34:35.039487   13399 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/addons-440446/id_rsa Username:docker}
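	The pattern in the lines above repeats for every addon: each manifest is streamed over SSH ("scp memory --> /etc/kubernetes/addons/...") rather than copied from a file on disk, and each transfer is preceded by a docker container inspect call that resolves the host port mapped to the node container's 22/tcp, which is where every "new ssh client" on 127.0.0.1:32772 comes from. A minimal sketch of doing the same lookup and connection by hand, assuming the container name, port, and key path shown in the log:

	    # Resolve the host port Docker mapped to the node container's SSH port (22/tcp).
	    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-440446

	    # Connect the way sshutil.go does: key-based login as the "docker" user on 127.0.0.1.
	    ssh -i /home/jenkins/minikube-integration/17297-5744/.minikube/machines/addons-440446/id_rsa \
	        -p 32772 docker@127.0.0.1 'ls /etc/kubernetes/addons'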
	I0925 10:34:35.172550   13399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0925 10:34:35.173436   13399 node_ready.go:35] waiting up to 6m0s for node "addons-440446" to be "Ready" ...
	I0925 10:34:35.276521   13399 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0925 10:34:35.276543   13399 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0925 10:34:35.446703   13399 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0925 10:34:35.446734   13399 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0925 10:34:35.451723   13399 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0925 10:34:35.451747   13399 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0925 10:34:35.453720   13399 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0925 10:34:35.458755   13399 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I0925 10:34:35.458809   13399 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0925 10:34:35.459489   13399 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I0925 10:34:35.459504   13399 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0925 10:34:35.459775   13399 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0925 10:34:35.467010   13399 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0925 10:34:35.556966   13399 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0925 10:34:35.557046   13399 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0925 10:34:35.557743   13399 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0925 10:34:35.558228   13399 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0925 10:34:35.558273   13399 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0925 10:34:35.565717   13399 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0925 10:34:35.565776   13399 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0925 10:34:35.566518   13399 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0925 10:34:35.566540   13399 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0925 10:34:35.647523   13399 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0925 10:34:35.662404   13399 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0925 10:34:35.662473   13399 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0925 10:34:35.752361   13399 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0925 10:34:35.752435   13399 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0925 10:34:35.758019   13399 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0925 10:34:35.758042   13399 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0925 10:34:35.762111   13399 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0925 10:34:35.762130   13399 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0925 10:34:35.766871   13399 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0925 10:34:35.766906   13399 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0925 10:34:35.848100   13399 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0925 10:34:35.848126   13399 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0925 10:34:35.866450   13399 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0925 10:34:35.945512   13399 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0925 10:34:35.964032   13399 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0925 10:34:35.968014   13399 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0925 10:34:35.968036   13399 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0925 10:34:36.046447   13399 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0925 10:34:36.046522   13399 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0925 10:34:36.054269   13399 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0925 10:34:36.054344   13399 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0925 10:34:36.161913   13399 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0925 10:34:36.161944   13399 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0925 10:34:36.264595   13399 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0925 10:34:36.264620   13399 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0925 10:34:36.266935   13399 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I0925 10:34:36.266957   13399 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0925 10:34:36.461952   13399 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0925 10:34:36.461977   13399 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0925 10:34:36.653236   13399 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0925 10:34:36.653266   13399 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0925 10:34:36.658959   13399 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0925 10:34:36.658985   13399 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0925 10:34:37.046446   13399 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0925 10:34:37.046872   13399 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0925 10:34:37.046936   13399 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0925 10:34:37.147796   13399 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0925 10:34:37.159543   13399 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.986944619s)
	I0925 10:34:37.159588   13399 start.go:923] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
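	The 1.99s command that just completed is the CoreDNS rewrite started at 10:34:35.172: sed inserts a hosts block resolving host.minikube.internal to 192.168.49.1 (with fallthrough for everything else) ahead of the "forward . /etc/resolv.conf" line, adds a log directive ahead of errors, and pipes the edited ConfigMap back through kubectl replace. Reading the Corefile key back out is enough to confirm the injected block (a verification step, not part of the test run):

	    # Print the rewritten Corefile and show the injected hosts block.
	    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -B1 -A3 'hosts {'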
	I0925 10:34:37.259357   13399 node_ready.go:58] node "addons-440446" has status "Ready":"False"
	I0925 10:34:37.358894   13399 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0925 10:34:37.358974   13399 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0925 10:34:37.661943   13399 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0925 10:34:37.662022   13399 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0925 10:34:37.949890   13399 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0925 10:34:37.949968   13399 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0925 10:34:38.258658   13399 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0925 10:34:38.258731   13399 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0925 10:34:38.368840   13399 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0925 10:34:39.758891   13399 node_ready.go:58] node "addons-440446" has status "Ready":"False"
	I0925 10:34:40.165188   13399 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.711429687s)
	I0925 10:34:40.165890   13399 addons.go:467] Verifying addon ingress=true in "addons-440446"
	I0925 10:34:40.167342   13399 out.go:177] * Verifying ingress addon...
	I0925 10:34:40.165347   13399 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.705551255s)
	I0925 10:34:40.165410   13399 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.698343035s)
	I0925 10:34:40.165452   13399 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.607656794s)
	I0925 10:34:40.165498   13399 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.517900823s)
	I0925 10:34:40.165542   13399 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.299023465s)
	I0925 10:34:40.165638   13399 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.220044526s)
	I0925 10:34:40.165683   13399 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (4.201572161s)
	I0925 10:34:40.165782   13399 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.119300907s)
	I0925 10:34:40.165837   13399 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.018007389s)
	I0925 10:34:40.167409   13399 addons.go:467] Verifying addon registry=true in "addons-440446"
	I0925 10:34:40.169872   13399 out.go:177] * Verifying registry addon...
	W0925 10:34:40.167512   13399 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0925 10:34:40.167531   13399 addons.go:467] Verifying addon metrics-server=true in "addons-440446"
	I0925 10:34:40.170672   13399 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0925 10:34:40.171137   13399 retry.go:31] will retry after 244.177943ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
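	The retried failure above is a CRD-establishment race, not a bad manifest: all six files go to the API server in a single kubectl apply, and the VolumeSnapshotClass object in csi-hostpath-snapshotclass.yaml is rejected because the volumesnapshotclasses.snapshot.storage.k8s.io CRD created in the same batch is not yet established, hence "ensure CRDs are installed first". minikube's answer is the --force re-apply visible at 10:34:40.416 below, which succeeds once the CRDs have settled. A hedged sketch of sidestepping the race with plain kubectl, using the file names from the log:

	    # Create the CRDs first and wait until the API server reports them established,
	    # then apply the custom resources that depend on them.
	    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	    kubectl wait --for=condition=Established --timeout=60s \
	        crd/volumesnapshotclasses.snapshot.storage.k8s.io
	    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml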
	I0925 10:34:40.171984   13399 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0925 10:34:40.176058   13399 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0925 10:34:40.176073   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:34:40.176167   13399 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0925 10:34:40.176183   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:34:40.178786   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:34:40.178982   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
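	The kapi.go lines follow one scheme per addon: ":75" announces the label selector being watched, ":86" reports how many pods currently match it, and ":96" repeats on every poll until the pod leaves Pending ("Pending: [<nil>]" meaning phase Pending with no container status to report yet). Outside the test harness the same wait is a one-liner; a sketch using the registry selector from the log:

	    # Block until every pod matching the addon's label reports the Ready condition.
	    kubectl -n kube-system wait --for=condition=Ready pod \
	        -l kubernetes.io/minikube-addons=registry --timeout=6m0s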
	I0925 10:34:40.416177   13399 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0925 10:34:40.682610   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:34:40.682777   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:34:41.152983   13399 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.784042169s)
	I0925 10:34:41.153021   13399 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-440446"
	I0925 10:34:41.154976   13399 out.go:177] * Verifying csi-hostpath-driver addon...
	I0925 10:34:41.158365   13399 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0925 10:34:41.162501   13399 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0925 10:34:41.162524   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:34:41.166212   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:34:41.182628   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:34:41.182760   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:34:41.500746   13399 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.084527127s)
	I0925 10:34:41.669935   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:34:41.682521   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:34:41.682654   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:34:41.771620   13399 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0925 10:34:41.771678   13399 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-440446
	I0925 10:34:41.789700   13399 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/addons-440446/id_rsa Username:docker}
	I0925 10:34:41.964658   13399 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0925 10:34:42.065579   13399 addons.go:231] Setting addon gcp-auth=true in "addons-440446"
	I0925 10:34:42.065632   13399 host.go:66] Checking if "addons-440446" exists ...
	I0925 10:34:42.066135   13399 cli_runner.go:164] Run: docker container inspect addons-440446 --format={{.State.Status}}
	I0925 10:34:42.083513   13399 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0925 10:34:42.083555   13399 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-440446
	I0925 10:34:42.098655   13399 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/addons-440446/id_rsa Username:docker}
	I0925 10:34:42.170559   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:34:42.249569   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:34:42.250197   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:34:42.257362   13399 node_ready.go:58] node "addons-440446" has status "Ready":"False"
	I0925 10:34:42.450054   13399 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0925 10:34:42.451777   13399 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0925 10:34:42.453182   13399 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0925 10:34:42.453199   13399 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0925 10:34:42.552281   13399 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0925 10:34:42.552307   13399 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0925 10:34:42.650524   13399 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0925 10:34:42.650547   13399 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0925 10:34:42.745685   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:34:42.747128   13399 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0925 10:34:42.750708   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:34:42.751154   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:34:43.248993   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:34:43.249509   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:34:43.249966   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:34:43.672296   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:34:43.751339   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:34:43.752268   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:34:44.171110   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:34:44.248809   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:34:44.249046   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:34:44.258562   13399 node_ready.go:58] node "addons-440446" has status "Ready":"False"
	I0925 10:34:44.670744   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:34:44.749197   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:34:44.749880   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:34:45.159870   13399 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.412699478s)
	I0925 10:34:45.160780   13399 addons.go:467] Verifying addon gcp-auth=true in "addons-440446"
	I0925 10:34:45.162365   13399 out.go:177] * Verifying gcp-auth addon...
	I0925 10:34:45.165374   13399 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0925 10:34:45.168310   13399 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0925 10:34:45.168329   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:34:45.171830   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:34:45.174195   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:34:45.251658   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:34:45.255456   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:34:45.671872   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:34:45.674852   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:34:45.749286   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:34:45.750377   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:34:46.170315   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:34:46.174829   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:34:46.182910   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:34:46.183172   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:34:46.670654   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:34:46.674906   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:34:46.683101   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:34:46.683313   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:34:46.758395   13399 node_ready.go:58] node "addons-440446" has status "Ready":"False"
	I0925 10:34:47.170821   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:34:47.175737   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:34:47.183291   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:34:47.183391   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:34:47.670522   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:34:47.674719   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:34:47.682508   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:34:47.682776   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:34:48.169945   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:34:48.175216   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:34:48.182100   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:34:48.182234   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:34:48.670923   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:34:48.675207   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:34:48.683482   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:34:48.683692   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:34:49.171358   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:34:49.174462   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:34:49.182128   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:34:49.182896   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:34:49.257474   13399 node_ready.go:58] node "addons-440446" has status "Ready":"False"
	I0925 10:34:49.670554   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:34:49.674886   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:34:49.683007   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:34:49.683300   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:34:50.170114   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:34:50.175029   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:34:50.183132   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:34:50.183365   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:34:50.670570   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:34:50.674817   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:34:50.683018   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:34:50.683271   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:34:51.170766   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:34:51.175030   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:34:51.182225   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:34:51.182282   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:34:51.257897   13399 node_ready.go:58] node "addons-440446" has status "Ready":"False"
	I0925 10:34:51.670337   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:34:51.674389   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:34:51.682142   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:34:51.682306   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:34:52.170853   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:34:52.174633   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:34:52.183221   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:34:52.183614   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:34:52.670404   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:34:52.674249   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:34:52.682031   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:34:52.682074   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:34:53.170364   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:34:53.175285   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:34:53.182226   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:34:53.182412   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:34:53.670536   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:34:53.674598   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:34:53.682649   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:34:53.682679   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:34:53.757020   13399 node_ready.go:58] node "addons-440446" has status "Ready":"False"
	I0925 10:34:54.170664   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:34:54.174616   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:34:54.182556   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:34:54.182588   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:34:54.670762   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:34:54.674797   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:34:54.682233   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:34:54.682527   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:34:55.170423   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:34:55.174458   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:34:55.182383   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:34:55.182386   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:34:55.670658   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:34:55.674633   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:34:55.682456   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:34:55.682499   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:34:55.757961   13399 node_ready.go:58] node "addons-440446" has status "Ready":"False"
	I0925 10:34:56.170666   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:34:56.174699   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:34:56.184440   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:34:56.184551   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:34:56.670152   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:34:56.674233   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:34:56.682262   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:34:56.682452   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:34:57.170626   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:34:57.174739   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:34:57.182706   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:34:57.182749   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:34:57.669960   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:34:57.675147   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:34:57.681714   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:34:57.681845   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:34:58.169760   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:34:58.174683   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:34:58.182658   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:34:58.182685   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:34:58.257332   13399 node_ready.go:58] node "addons-440446" has status "Ready":"False"
	I0925 10:34:58.670059   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:34:58.675328   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:34:58.682364   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:34:58.682501   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:34:59.170705   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:34:59.174813   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:34:59.182808   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:34:59.182872   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:34:59.669912   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:34:59.675053   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:34:59.682758   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:34:59.682809   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:00.170270   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:00.174454   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:00.182710   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:00.182777   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:00.257381   13399 node_ready.go:58] node "addons-440446" has status "Ready":"False"
	I0925 10:35:00.670000   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:00.674929   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:00.682655   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:00.682995   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:01.169918   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:01.174863   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:01.182909   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:01.183022   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:01.670507   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:01.674589   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:01.683307   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:01.683680   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:02.170168   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:02.175261   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:02.182241   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:02.182447   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:02.257676   13399 node_ready.go:58] node "addons-440446" has status "Ready":"False"
	I0925 10:35:02.670198   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:02.674226   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:02.681788   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:02.681961   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:03.169695   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:03.176400   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:03.182013   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:03.182129   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:03.670014   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:03.675041   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:03.682056   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:03.682250   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:04.170143   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:04.175205   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:04.181938   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:04.181999   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:04.259386   13399 node_ready.go:58] node "addons-440446" has status "Ready":"False"
	I0925 10:35:04.670213   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:04.675294   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:04.681868   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:04.682078   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:05.170654   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:05.174691   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:05.182731   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:05.183007   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:05.669900   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:05.674928   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:05.682857   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:05.682909   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:06.172071   13399 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0925 10:35:06.172095   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:06.174197   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:06.182403   13399 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0925 10:35:06.182423   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:06.182913   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:06.257849   13399 node_ready.go:49] node "addons-440446" has status "Ready":"True"
	I0925 10:35:06.257876   13399 node_ready.go:38] duration metric: took 31.084416198s waiting for node "addons-440446" to be "Ready" ...
	I0925 10:35:06.257886   13399 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 10:35:06.267213   13399 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-rtgtj" in "kube-system" namespace to be "Ready" ...
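
The transition just logged (node_ready.go flipping from "Ready":"False" to "Ready":"True", then pod_ready.go fanning out per-pod waits for the system-critical pods) is a plain polling loop against the API server. The sketch below shows that pattern with client-go; the `sketch` package name, clientset wiring, and the 2s/6m interval-timeout pair are illustrative assumptions, not minikube's exact code.

    package sketch

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls the Node object until its Ready condition is True,
    // mirroring the node_ready.go:58 ("False") -> node_ready.go:49 ("True")
    // transition in the log above.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
        return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // transient API error: keep polling
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

    // podReady is the per-pod check pod_ready.go applies to coredns, etcd,
    // the apiserver, and the rest: "Ready" means the PodReady condition is True.
    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }
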
	I0925 10:35:06.672299   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:06.675162   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:06.683278   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:06.683394   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:07.172433   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:07.249826   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:07.253379   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:07.254377   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:07.358950   13399 pod_ready.go:92] pod "coredns-5dd5756b68-rtgtj" in "kube-system" namespace has status "Ready":"True"
	I0925 10:35:07.359026   13399 pod_ready.go:81] duration metric: took 1.091788213s waiting for pod "coredns-5dd5756b68-rtgtj" in "kube-system" namespace to be "Ready" ...
	I0925 10:35:07.359067   13399 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-440446" in "kube-system" namespace to be "Ready" ...
	I0925 10:35:07.364267   13399 pod_ready.go:92] pod "etcd-addons-440446" in "kube-system" namespace has status "Ready":"True"
	I0925 10:35:07.364286   13399 pod_ready.go:81] duration metric: took 5.187888ms waiting for pod "etcd-addons-440446" in "kube-system" namespace to be "Ready" ...
	I0925 10:35:07.364299   13399 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-440446" in "kube-system" namespace to be "Ready" ...
	I0925 10:35:07.369517   13399 pod_ready.go:92] pod "kube-apiserver-addons-440446" in "kube-system" namespace has status "Ready":"True"
	I0925 10:35:07.369536   13399 pod_ready.go:81] duration metric: took 5.229427ms waiting for pod "kube-apiserver-addons-440446" in "kube-system" namespace to be "Ready" ...
	I0925 10:35:07.369547   13399 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-440446" in "kube-system" namespace to be "Ready" ...
	I0925 10:35:07.458979   13399 pod_ready.go:92] pod "kube-controller-manager-addons-440446" in "kube-system" namespace has status "Ready":"True"
	I0925 10:35:07.459003   13399 pod_ready.go:81] duration metric: took 89.446497ms waiting for pod "kube-controller-manager-addons-440446" in "kube-system" namespace to be "Ready" ...
	I0925 10:35:07.459018   13399 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rpctb" in "kube-system" namespace to be "Ready" ...
	I0925 10:35:07.671522   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:07.675322   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:07.747059   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:07.747161   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:07.858796   13399 pod_ready.go:92] pod "kube-proxy-rpctb" in "kube-system" namespace has status "Ready":"True"
	I0925 10:35:07.858818   13399 pod_ready.go:81] duration metric: took 399.792732ms waiting for pod "kube-proxy-rpctb" in "kube-system" namespace to be "Ready" ...
	I0925 10:35:07.858831   13399 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-440446" in "kube-system" namespace to be "Ready" ...
	I0925 10:35:08.172068   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:08.174731   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:08.182915   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:08.183222   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:08.258909   13399 pod_ready.go:92] pod "kube-scheduler-addons-440446" in "kube-system" namespace has status "Ready":"True"
	I0925 10:35:08.258946   13399 pod_ready.go:81] duration metric: took 400.10573ms waiting for pod "kube-scheduler-addons-440446" in "kube-system" namespace to be "Ready" ...
	I0925 10:35:08.258959   13399 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-gf64x" in "kube-system" namespace to be "Ready" ...
	I0925 10:35:08.672834   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:08.675379   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:08.682920   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:08.683187   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:09.171420   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:09.175168   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:09.183745   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:09.183925   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:09.672952   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:09.675227   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:09.682966   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:09.683170   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:10.171519   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:10.175452   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:10.183305   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:10.183365   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:10.564959   13399 pod_ready.go:102] pod "metrics-server-7c66d45ddc-gf64x" in "kube-system" namespace has status "Ready":"False"
	I0925 10:35:10.671526   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:10.675454   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:10.682477   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:10.683321   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:11.172462   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:11.175167   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:11.184839   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:11.184960   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:11.672293   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:11.678023   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:11.683573   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:11.683646   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:12.171844   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:12.174508   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:12.182516   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:12.182771   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:12.672062   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:12.674808   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:12.684151   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:12.685046   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:13.063549   13399 pod_ready.go:102] pod "metrics-server-7c66d45ddc-gf64x" in "kube-system" namespace has status "Ready":"False"
	I0925 10:35:13.170722   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:13.174321   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:13.182549   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:13.182836   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:13.670873   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:13.675330   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:13.683087   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:13.683378   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:14.171640   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:14.175558   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:14.183500   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:14.183671   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:14.671268   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:14.676501   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:14.682814   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:14.683048   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:15.064211   13399 pod_ready.go:102] pod "metrics-server-7c66d45ddc-gf64x" in "kube-system" namespace has status "Ready":"False"
	I0925 10:35:15.171585   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:15.175752   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:15.183914   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:15.184592   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:15.671354   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:15.674704   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:15.683464   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:15.683567   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:16.170928   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:16.174582   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:16.183151   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:16.183259   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:16.672009   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:16.674974   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:16.684129   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:16.684301   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:17.064943   13399 pod_ready.go:102] pod "metrics-server-7c66d45ddc-gf64x" in "kube-system" namespace has status "Ready":"False"
	I0925 10:35:17.171900   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:17.174833   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:17.183691   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:17.183857   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:17.671950   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:17.674280   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:17.682780   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:17.683096   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:18.172692   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:18.175845   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:18.248628   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:18.248825   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:18.672529   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:18.675442   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:18.683166   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:18.684220   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:19.171831   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:19.174461   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:19.182839   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:19.182879   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:19.564225   13399 pod_ready.go:102] pod "metrics-server-7c66d45ddc-gf64x" in "kube-system" namespace has status "Ready":"False"
	I0925 10:35:19.671617   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:19.675233   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:19.682159   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:19.682197   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:20.171456   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:20.175001   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:20.183719   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:20.184291   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:20.671964   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:20.674912   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:20.683277   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:20.683328   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:21.173073   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:21.174756   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:21.182704   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:21.182913   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:21.671115   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:21.674251   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:21.682645   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:21.682916   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:22.065433   13399 pod_ready.go:102] pod "metrics-server-7c66d45ddc-gf64x" in "kube-system" namespace has status "Ready":"False"
	I0925 10:35:22.171730   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:22.174457   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:22.183246   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:22.183272   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:22.671506   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:22.675443   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:22.683128   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:22.683813   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:23.171668   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:23.175474   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:23.182734   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:23.182849   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:23.728946   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:23.729974   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:23.730248   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:23.731380   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:24.172246   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:24.175066   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:24.183815   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:24.183890   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:24.564443   13399 pod_ready.go:102] pod "metrics-server-7c66d45ddc-gf64x" in "kube-system" namespace has status "Ready":"False"
	I0925 10:35:24.672201   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:24.675043   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:24.683443   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:24.683672   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:25.171509   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:25.175103   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:25.183933   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:25.183989   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:25.672330   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:25.675077   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:25.683496   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:25.683725   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:26.170685   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:26.175387   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:26.182875   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:26.183149   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:26.671528   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:26.674938   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:26.682756   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:26.682842   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:27.063768   13399 pod_ready.go:102] pod "metrics-server-7c66d45ddc-gf64x" in "kube-system" namespace has status "Ready":"False"
	I0925 10:35:27.170791   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:27.174388   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:27.182452   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:27.182459   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0925 10:35:27.673107   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:27.747370   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:27.748787   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:27.749016   13399 kapi.go:107] duration metric: took 47.577030045s to wait for kubernetes.io/minikube-addons=registry ...
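
Each kapi.go:96 line above is one iteration of a per-label-selector wait: kapi.go:86 fires when pods matching the selector first appear, and kapi.go:107 (just logged for the registry selector) closes the wait with a duration metric once every matching pod is ready. A hedged sketch of the per-iteration check, reusing podReady from the earlier sketch; the namespace argument is an assumption:

    package sketch

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // selectorReady lists pods for a selector such as
    // "kubernetes.io/minikube-addons=registry" and reports true only when
    // at least one pod exists and all of them are Running and Ready; pods
    // that are not yet Running surface in the log as "current state: Pending".
    func selectorReady(ctx context.Context, cs kubernetes.Interface, ns, sel string) (bool, error) {
        pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: sel})
        if err != nil || len(pods.Items) == 0 {
            return false, nil
        }
        for i := range pods.Items {
            p := &pods.Items[i]
            if p.Status.Phase != corev1.PodRunning || !podReady(p) {
                return false, nil
            }
        }
        return true, nil
    }
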
	I0925 10:35:28.172229   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:28.174794   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:28.183114   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:28.671753   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:28.674781   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:28.683013   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:29.064548   13399 pod_ready.go:102] pod "metrics-server-7c66d45ddc-gf64x" in "kube-system" namespace has status "Ready":"False"
	I0925 10:35:29.170696   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:29.174258   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:29.183160   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:29.671931   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:29.674416   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:29.682706   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:30.171093   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:30.176623   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:30.182881   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:30.671480   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:30.675303   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:30.683125   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:31.065194   13399 pod_ready.go:102] pod "metrics-server-7c66d45ddc-gf64x" in "kube-system" namespace has status "Ready":"False"
	I0925 10:35:31.251489   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:31.252104   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:31.253998   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:31.672798   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:31.758310   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:31.760399   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:32.171863   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:32.175014   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:32.249701   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:32.646672   13399 pod_ready.go:92] pod "metrics-server-7c66d45ddc-gf64x" in "kube-system" namespace has status "Ready":"True"
	I0925 10:35:32.646773   13399 pod_ready.go:81] duration metric: took 24.387804095s waiting for pod "metrics-server-7c66d45ddc-gf64x" in "kube-system" namespace to be "Ready" ...
	I0925 10:35:32.646835   13399 pod_ready.go:38] duration metric: took 26.388930822s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 10:35:32.646873   13399 api_server.go:52] waiting for apiserver process to appear ...
	I0925 10:35:32.646949   13399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 10:35:32.667052   13399 api_server.go:72] duration metric: took 57.726052224s to wait for apiserver process to appear ...
	I0925 10:35:32.667088   13399 api_server.go:88] waiting for apiserver healthz status ...
	I0925 10:35:32.667106   13399 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0925 10:35:32.751705   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:32.752445   13399 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0925 10:35:32.754259   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:32.754686   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:32.755266   13399 api_server.go:141] control plane version: v1.28.2
	I0925 10:35:32.755288   13399 api_server.go:131] duration metric: took 88.192553ms to wait for apiserver health ...
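
The api_server.go sequence just completed does two things, each a single call through the client: poll GET /healthz until it returns 200 with body "ok" (api_server.go:253/279), then read the control-plane version (api_server.go:141). A sketch under the same assumed `sketch` package:

    package sketch

    import (
        "context"

        "k8s.io/client-go/kubernetes"
    )

    // apiserverHealthy performs the /healthz probe from the log; healthy
    // means the request succeeds and the body is exactly "ok".
    func apiserverHealthy(ctx context.Context, cs kubernetes.Interface) bool {
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
        return err == nil && string(body) == "ok"
    }

    // controlPlaneVersion reads the value logged as
    // "control plane version: v1.28.2".
    func controlPlaneVersion(cs kubernetes.Interface) (string, error) {
        v, err := cs.Discovery().ServerVersion()
        if err != nil {
            return "", err
        }
        return v.GitVersion, nil
    }
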
	I0925 10:35:32.755298   13399 system_pods.go:43] waiting for kube-system pods to appear ...
	I0925 10:35:32.766114   13399 system_pods.go:59] 18 kube-system pods found
	I0925 10:35:32.766197   13399 system_pods.go:61] "coredns-5dd5756b68-rtgtj" [d74f2067-ecbb-402f-ae47-3e79611df723] Running
	I0925 10:35:32.766217   13399 system_pods.go:61] "csi-hostpath-attacher-0" [dd72490e-1d71-4333-a107-aaa79a76aeb5] Running
	I0925 10:35:32.766231   13399 system_pods.go:61] "csi-hostpath-resizer-0" [4a9894b8-5914-49d0-b645-d1b523cfe150] Running
	I0925 10:35:32.766265   13399 system_pods.go:61] "csi-hostpathplugin-2lqqg" [e20f2dcc-625d-4abd-8db4-bf46d01177ff] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0925 10:35:32.766292   13399 system_pods.go:61] "etcd-addons-440446" [c6ec8f83-da83-4b24-9933-2cf17a304cd6] Running
	I0925 10:35:32.766308   13399 system_pods.go:61] "kindnet-8j4r4" [efeac0ea-ac85-43d5-a62a-db72d0295ac6] Running
	I0925 10:35:32.766324   13399 system_pods.go:61] "kube-apiserver-addons-440446" [84886187-4e99-4763-b213-91a7ca97d641] Running
	I0925 10:35:32.766340   13399 system_pods.go:61] "kube-controller-manager-addons-440446" [9f105e7d-da2a-46a3-907f-00d2b63904fb] Running
	I0925 10:35:32.766366   13399 system_pods.go:61] "kube-ingress-dns-minikube" [f8c32fc4-78a1-4124-a57d-171517f70e26] Running
	I0925 10:35:32.766387   13399 system_pods.go:61] "kube-proxy-rpctb" [fa22ebcd-47ef-4a7b-9243-0eedafcb487e] Running
	I0925 10:35:32.766408   13399 system_pods.go:61] "kube-scheduler-addons-440446" [bc59998b-27a6-421c-9d4b-e4706fd9fe82] Running
	I0925 10:35:32.766425   13399 system_pods.go:61] "metrics-server-7c66d45ddc-gf64x" [34885621-909d-433c-8a32-f7e24616c562] Running
	I0925 10:35:32.766441   13399 system_pods.go:61] "registry-4x2ds" [16f0fa7e-a090-4949-8aaa-1a67f930d55d] Running
	I0925 10:35:32.766468   13399 system_pods.go:61] "registry-proxy-44tvq" [5b8babe6-b83c-4179-92ea-4500aa2dddfb] Running
	I0925 10:35:32.766490   13399 system_pods.go:61] "snapshot-controller-58dbcc7b99-nhwvz" [ea2bad45-0094-46cd-9a91-84a282a9dff8] Running
	I0925 10:35:32.766507   13399 system_pods.go:61] "snapshot-controller-58dbcc7b99-scphb" [23fc8d0a-6eac-4d61-8f71-e72aba709dc7] Running
	I0925 10:35:32.766523   13399 system_pods.go:61] "storage-provisioner" [4e737c9e-7dc3-4a19-800f-2758d3b8c4e3] Running
	I0925 10:35:32.766538   13399 system_pods.go:61] "tiller-deploy-7b677967b9-fp8ss" [8008ff6a-2c21-487b-927e-dcbe79881038] Running
	I0925 10:35:32.766563   13399 system_pods.go:74] duration metric: took 11.248605ms to wait for pod list to return data ...
	I0925 10:35:32.766588   13399 default_sa.go:34] waiting for default service account to be created ...
	I0925 10:35:32.768842   13399 default_sa.go:45] found service account: "default"
	I0925 10:35:32.768862   13399 default_sa.go:55] duration metric: took 2.257733ms for default service account to be created ...
	I0925 10:35:32.768871   13399 system_pods.go:116] waiting for k8s-apps to be running ...
	I0925 10:35:32.851985   13399 system_pods.go:86] 18 kube-system pods found
	I0925 10:35:32.852014   13399 system_pods.go:89] "coredns-5dd5756b68-rtgtj" [d74f2067-ecbb-402f-ae47-3e79611df723] Running
	I0925 10:35:32.852023   13399 system_pods.go:89] "csi-hostpath-attacher-0" [dd72490e-1d71-4333-a107-aaa79a76aeb5] Running
	I0925 10:35:32.852029   13399 system_pods.go:89] "csi-hostpath-resizer-0" [4a9894b8-5914-49d0-b645-d1b523cfe150] Running
	I0925 10:35:32.852040   13399 system_pods.go:89] "csi-hostpathplugin-2lqqg" [e20f2dcc-625d-4abd-8db4-bf46d01177ff] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0925 10:35:32.852054   13399 system_pods.go:89] "etcd-addons-440446" [c6ec8f83-da83-4b24-9933-2cf17a304cd6] Running
	I0925 10:35:32.852061   13399 system_pods.go:89] "kindnet-8j4r4" [efeac0ea-ac85-43d5-a62a-db72d0295ac6] Running
	I0925 10:35:32.852068   13399 system_pods.go:89] "kube-apiserver-addons-440446" [84886187-4e99-4763-b213-91a7ca97d641] Running
	I0925 10:35:32.852076   13399 system_pods.go:89] "kube-controller-manager-addons-440446" [9f105e7d-da2a-46a3-907f-00d2b63904fb] Running
	I0925 10:35:32.852086   13399 system_pods.go:89] "kube-ingress-dns-minikube" [f8c32fc4-78a1-4124-a57d-171517f70e26] Running
	I0925 10:35:32.852093   13399 system_pods.go:89] "kube-proxy-rpctb" [fa22ebcd-47ef-4a7b-9243-0eedafcb487e] Running
	I0925 10:35:32.852135   13399 system_pods.go:89] "kube-scheduler-addons-440446" [bc59998b-27a6-421c-9d4b-e4706fd9fe82] Running
	I0925 10:35:32.852152   13399 system_pods.go:89] "metrics-server-7c66d45ddc-gf64x" [34885621-909d-433c-8a32-f7e24616c562] Running
	I0925 10:35:32.852161   13399 system_pods.go:89] "registry-4x2ds" [16f0fa7e-a090-4949-8aaa-1a67f930d55d] Running
	I0925 10:35:32.852168   13399 system_pods.go:89] "registry-proxy-44tvq" [5b8babe6-b83c-4179-92ea-4500aa2dddfb] Running
	I0925 10:35:32.852178   13399 system_pods.go:89] "snapshot-controller-58dbcc7b99-nhwvz" [ea2bad45-0094-46cd-9a91-84a282a9dff8] Running
	I0925 10:35:32.852185   13399 system_pods.go:89] "snapshot-controller-58dbcc7b99-scphb" [23fc8d0a-6eac-4d61-8f71-e72aba709dc7] Running
	I0925 10:35:32.852194   13399 system_pods.go:89] "storage-provisioner" [4e737c9e-7dc3-4a19-800f-2758d3b8c4e3] Running
	I0925 10:35:32.852208   13399 system_pods.go:89] "tiller-deploy-7b677967b9-fp8ss" [8008ff6a-2c21-487b-927e-dcbe79881038] Running
	I0925 10:35:32.852218   13399 system_pods.go:126] duration metric: took 83.341845ms to wait for k8s-apps to be running ...
	I0925 10:35:32.852227   13399 system_svc.go:44] waiting for kubelet service to be running ....
	I0925 10:35:32.852277   13399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 10:35:32.868872   13399 system_svc.go:56] duration metric: took 16.635202ms WaitForService to wait for kubelet.
	I0925 10:35:32.868901   13399 kubeadm.go:581] duration metric: took 57.927905845s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
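
The kubelet check at system_svc.go:44-56 runs systemctl through minikube's ssh_runner inside the node; systemd's contract for `is-active --quiet` is exit status 0 iff the unit is active. A local sketch of the same probe with os/exec (the SSH transport is elided, and running it directly on the host is an assumption):

    package sketch

    import (
        "context"
        "os/exec"
    )

    // kubeletActive mirrors the ssh_runner line above: a quiet
    // `systemctl is-active` for kubelet exits 0 iff the unit is active.
    func kubeletActive(ctx context.Context) bool {
        return exec.CommandContext(ctx, "sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
    }
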
	I0925 10:35:32.868954   13399 node_conditions.go:102] verifying NodePressure condition ...
	I0925 10:35:32.873449   13399 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0925 10:35:32.873481   13399 node_conditions.go:123] node cpu capacity is 8
	I0925 10:35:32.873493   13399 node_conditions.go:105] duration metric: took 4.533206ms to run NodePressure ...
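
The NodePressure verification above boils down to reading the node's capacity fields; the two figures logged (ephemeral storage 304681132Ki, 8 CPUs) come straight from node.Status.Capacity. A sketch:

    package sketch

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // nodeCapacity returns the two figures node_conditions.go logs above.
    func nodeCapacity(ctx context.Context, cs kubernetes.Interface, name string) (storage, cpu string, err error) {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return "", "", err
        }
        eph := node.Status.Capacity[corev1.ResourceEphemeralStorage]
        c := node.Status.Capacity[corev1.ResourceCPU]
        return eph.String(), c.String(), nil
    }
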
	I0925 10:35:32.873506   13399 start.go:228] waiting for startup goroutines ...
	I0925 10:35:33.172002   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:33.249643   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:33.251599   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:33.672355   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:33.675022   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:33.683581   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:34.171789   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:34.174654   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:34.183002   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:34.673244   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:34.674961   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:34.683070   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:35.171464   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:35.175220   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:35.183664   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:35.671971   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:35.674869   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:35.683630   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:36.172709   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:36.174870   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:36.182728   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:36.671657   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:36.674970   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:36.683517   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:37.171587   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:37.175244   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:37.182912   13399 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0925 10:35:37.670670   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:37.674764   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:37.682919   13399 kapi.go:107] duration metric: took 57.512244239s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0925 10:35:38.171218   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:38.174404   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0925 10:35:38.672235   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:38.675168   13399 kapi.go:107] duration metric: took 53.50979122s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0925 10:35:38.678616   13399 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-440446 cluster.
	I0925 10:35:38.682351   13399 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0925 10:35:38.684015   13399 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
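
The three out.go hints above summarize the gcp-auth webhook: credentials are mounted into every new pod unless the pod opts out via the `gcp-auth-skip-secret` label. A hedged example of the opt-out at pod-creation time; the pod name, namespace, and label value are illustrative (the hint only specifies the label key):

    package sketch

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // createUnmountedPod creates a pod carrying the gcp-auth-skip-secret
    // label so the webhook leaves it without GCP credentials.
    func createUnmountedPod(ctx context.Context, cs kubernetes.Interface) error {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name:   "no-gcp-creds", // illustrative name
                Labels: map[string]string{"gcp-auth-skip-secret": "true"},
            },
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "app",
                    Image: "gcr.io/google-samples/hello-app:1.0", // image from the report
                }},
            },
        }
        _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{})
        return err
    }
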
	I0925 10:35:39.171507   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:39.672438   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:40.172275   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:40.670597   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:41.173282   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:41.670777   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:42.174172   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:42.671530   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:43.171150   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:43.671529   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:44.171565   13399 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0925 10:35:44.671690   13399 kapi.go:107] duration metric: took 1m3.513327405s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0925 10:35:44.673578   13399 out.go:177] * Enabled addons: inspektor-gadget, default-storageclass, ingress-dns, storage-provisioner, cloud-spanner, helm-tiller, metrics-server, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0925 10:35:44.676049   13399 addons.go:502] enable addons completed in 1m9.795186555s: enabled=[inspektor-gadget default-storageclass ingress-dns storage-provisioner cloud-spanner helm-tiller metrics-server volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0925 10:35:44.676079   13399 start.go:233] waiting for cluster config update ...
	I0925 10:35:44.676095   13399 start.go:242] writing updated cluster config ...
	I0925 10:35:44.676312   13399 ssh_runner.go:195] Run: rm -f paused
	I0925 10:35:44.722320   13399 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I0925 10:35:44.724000   13399 out.go:177] * Done! kubectl is now configured to use "addons-440446" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Sep 25 10:38:22 addons-440446 crio[950]: time="2023-09-25 10:38:22.706926626Z" level=info msg="Pulled image: gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb" id=bbd652d4-6f31-4c60-96a3-3f857b69190b name=/runtime.v1.ImageService/PullImage
	Sep 25 10:38:22 addons-440446 crio[950]: time="2023-09-25 10:38:22.707738024Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=c6c73892-911e-44a0-a346-5daf5b585f0c name=/runtime.v1.ImageService/ImageStatus
	Sep 25 10:38:22 addons-440446 crio[950]: time="2023-09-25 10:38:22.708813796Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a39a074194753e46f21cfbf0b4253444939f276ed100d23d5fc568ada19a9ebb,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb],Size_:28999826,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=c6c73892-911e-44a0-a346-5daf5b585f0c name=/runtime.v1.ImageService/ImageStatus
	Sep 25 10:38:22 addons-440446 crio[950]: time="2023-09-25 10:38:22.709660019Z" level=info msg="Creating container: default/hello-world-app-5d77478584-m7dx5/hello-world-app" id=3c645291-c51c-4faa-87b0-918c60a08317 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 25 10:38:22 addons-440446 crio[950]: time="2023-09-25 10:38:22.709763311Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 25 10:38:22 addons-440446 crio[950]: time="2023-09-25 10:38:22.778239171Z" level=info msg="Created container 22799926bb7bd319594d537bd20d11a2d48503a7da6845d785eed3367a4e48ee: default/hello-world-app-5d77478584-m7dx5/hello-world-app" id=3c645291-c51c-4faa-87b0-918c60a08317 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 25 10:38:22 addons-440446 crio[950]: time="2023-09-25 10:38:22.778806269Z" level=info msg="Starting container: 22799926bb7bd319594d537bd20d11a2d48503a7da6845d785eed3367a4e48ee" id=d35d59b6-fc13-4d5e-8b06-6ac1b9b2b435 name=/runtime.v1.RuntimeService/StartContainer
	Sep 25 10:38:22 addons-440446 crio[950]: time="2023-09-25 10:38:22.787361587Z" level=info msg="Started container" PID=9279 containerID=22799926bb7bd319594d537bd20d11a2d48503a7da6845d785eed3367a4e48ee description=default/hello-world-app-5d77478584-m7dx5/hello-world-app id=d35d59b6-fc13-4d5e-8b06-6ac1b9b2b435 name=/runtime.v1.RuntimeService/StartContainer sandboxID=45fe4600e3a5662c7dba4c183ed5179f131669288e0d12ccfe6c89eb9377a16f
	Sep 25 10:38:22 addons-440446 crio[950]: time="2023-09-25 10:38:22.874787439Z" level=info msg="Removing container: 3d339cb33991377a37bcd71ee9d3a04e54cee4f6d5c096978550f457323393f5" id=70ac4166-51aa-45a6-803e-31b3e79eca4a name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 25 10:38:22 addons-440446 crio[950]: time="2023-09-25 10:38:22.890332954Z" level=info msg="Removed container 3d339cb33991377a37bcd71ee9d3a04e54cee4f6d5c096978550f457323393f5: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=70ac4166-51aa-45a6-803e-31b3e79eca4a name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 25 10:38:24 addons-440446 crio[950]: time="2023-09-25 10:38:24.398423695Z" level=info msg="Stopping container: 27a048afd87d8efa86657d259c33c7b11817122d023e16ba562c9851cb5b8ff9 (timeout: 2s)" id=f57944f1-0cb1-4624-a24b-60a4f9a710ee name=/runtime.v1.RuntimeService/StopContainer
	Sep 25 10:38:26 addons-440446 crio[950]: time="2023-09-25 10:38:26.406161231Z" level=warning msg="Stopping container 27a048afd87d8efa86657d259c33c7b11817122d023e16ba562c9851cb5b8ff9 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=f57944f1-0cb1-4624-a24b-60a4f9a710ee name=/runtime.v1.RuntimeService/StopContainer
	Sep 25 10:38:26 addons-440446 conmon[5415]: conmon 27a048afd87d8efa8665 <ninfo>: container 5427 exited with status 137
	Sep 25 10:38:26 addons-440446 crio[950]: time="2023-09-25 10:38:26.548153763Z" level=info msg="Stopped container 27a048afd87d8efa86657d259c33c7b11817122d023e16ba562c9851cb5b8ff9: ingress-nginx/ingress-nginx-controller-f6b66b4b9-qmb8l/controller" id=f57944f1-0cb1-4624-a24b-60a4f9a710ee name=/runtime.v1.RuntimeService/StopContainer
	Sep 25 10:38:26 addons-440446 crio[950]: time="2023-09-25 10:38:26.548739612Z" level=info msg="Stopping pod sandbox: e05caface61a7c80e3194eaf0060b5c4f1022287c29a97c4b158c28a42743c49" id=0c3dccf5-b43a-49d2-a730-e1d9826c9f5b name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 25 10:38:26 addons-440446 crio[950]: time="2023-09-25 10:38:26.551378441Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-A7WZM54LF57JFQOZ - [0:0]\n:KUBE-HP-QGE3MYT4LWXZJXNY - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-X KUBE-HP-QGE3MYT4LWXZJXNY\n-X KUBE-HP-A7WZM54LF57JFQOZ\nCOMMIT\n"
	Sep 25 10:38:26 addons-440446 crio[950]: time="2023-09-25 10:38:26.552697788Z" level=info msg="Closing host port tcp:80"
	Sep 25 10:38:26 addons-440446 crio[950]: time="2023-09-25 10:38:26.552734991Z" level=info msg="Closing host port tcp:443"
	Sep 25 10:38:26 addons-440446 crio[950]: time="2023-09-25 10:38:26.554052142Z" level=info msg="Host port tcp:80 does not have an open socket"
	Sep 25 10:38:26 addons-440446 crio[950]: time="2023-09-25 10:38:26.554070396Z" level=info msg="Host port tcp:443 does not have an open socket"
	Sep 25 10:38:26 addons-440446 crio[950]: time="2023-09-25 10:38:26.554189491Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-f6b66b4b9-qmb8l Namespace:ingress-nginx ID:e05caface61a7c80e3194eaf0060b5c4f1022287c29a97c4b158c28a42743c49 UID:7347bdfe-b4fb-45c2-9da7-d6695437c6de NetNS:/var/run/netns/0b824ba0-2f31-40a9-96b6-f30512fee4bb Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 25 10:38:26 addons-440446 crio[950]: time="2023-09-25 10:38:26.554300596Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-f6b66b4b9-qmb8l from CNI network \"kindnet\" (type=ptp)"
	Sep 25 10:38:26 addons-440446 crio[950]: time="2023-09-25 10:38:26.594049910Z" level=info msg="Stopped pod sandbox: e05caface61a7c80e3194eaf0060b5c4f1022287c29a97c4b158c28a42743c49" id=0c3dccf5-b43a-49d2-a730-e1d9826c9f5b name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 25 10:38:26 addons-440446 crio[950]: time="2023-09-25 10:38:26.884168403Z" level=info msg="Removing container: 27a048afd87d8efa86657d259c33c7b11817122d023e16ba562c9851cb5b8ff9" id=1fbb9021-f427-4ddb-95c6-663580983acc name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 25 10:38:26 addons-440446 crio[950]: time="2023-09-25 10:38:26.899328248Z" level=info msg="Removed container 27a048afd87d8efa86657d259c33c7b11817122d023e16ba562c9851cb5b8ff9: ingress-nginx/ingress-nginx-controller-f6b66b4b9-qmb8l/controller" id=1fbb9021-f427-4ddb-95c6-663580983acc name=/runtime.v1.RuntimeService/RemoveContainer
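	
	The crio entries above are the post-test teardown: hello-world-app starts, then the ingress-nginx controller misses its 2s stop-signal window and is killed (conmon reports exit status 137, i.e. SIGKILL), after which its sandbox, host ports 80/443, and CNI attachment are released. To pull the same runtime log straight from the node (a sketch, assuming crio runs under systemd in this node image):
	
	  out/minikube-linux-amd64 -p addons-440446 ssh "sudo journalctl -u crio --no-pager --since '10:38:00'"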
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	22799926bb7bd       gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb                      8 seconds ago       Running             hello-world-app           0                   45fe4600e3a56       hello-world-app-5d77478584-m7dx5
	9bc84910f498a       docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70                              2 minutes ago       Running             nginx                     0                   cb653c3a699d6       nginx
	38aad77d7d5ca       ghcr.io/headlamp-k8s/headlamp@sha256:669910dd0cb38640a44cb93fedebaffb8f971131576ee13b2fee27a784e7503c                        2 minutes ago       Running             headlamp                  0                   f2cb30e461ba6       headlamp-58b88cff49-stwnz
	165496d90be40       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 2 minutes ago       Running             gcp-auth                  0                   ee48eb21eee0c       gcp-auth-d4c87556c-q6q67
	d93242f873203       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d   3 minutes ago       Exited              patch                     0                   b16f9c4e5981e       ingress-nginx-admission-patch-9p7pg
	99b38665593a1       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d   3 minutes ago       Exited              create                    0                   5c163c4e64d90       ingress-nginx-admission-create-xmgcx
	500aff2324cec       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             3 minutes ago       Running             storage-provisioner       0                   39547762ecbca       storage-provisioner
	0efaf14383447       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             3 minutes ago       Running             coredns                   0                   df8dea70cb90b       coredns-5dd5756b68-rtgtj
	a6fe8542219fe       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                                             3 minutes ago       Running             kindnet-cni               0                   12de58eb831bf       kindnet-8j4r4
	5ac8d0716aef5       c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0                                                             3 minutes ago       Running             kube-proxy                0                   e640564a36d3f       kube-proxy-rpctb
	9f6559c491eb3       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             4 minutes ago       Running             etcd                      0                   a17f140cfd917       etcd-addons-440446
	89a22f84223c2       cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce                                                             4 minutes ago       Running             kube-apiserver            0                   49c2e270a4066       kube-apiserver-addons-440446
	35b6df80b3a69       55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57                                                             4 minutes ago       Running             kube-controller-manager   0                   68a6048081482       kube-controller-manager-addons-440446
	3b90f1e6d315b       7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8                                                             4 minutes ago       Running             kube-scheduler            0                   9690f4b9dcf0a       kube-scheduler-addons-440446
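	
	The table above is container state straight from the CRI: hello-world-app is the only fresh container (8 seconds old), the two kube-webhook-certgen entries show Exited as expected for completed admission jobs, and everything else has been running since bring-up. The same view can be reproduced on the node (a sketch, assuming crictl is on the node's PATH):
	
	  out/minikube-linux-amd64 -p addons-440446 ssh "sudo crictl ps -a"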
	
	* 
	* ==> coredns [0efaf14383447ff94e65824cff35096d4060664e6030997791ec10f7bb0094f8] <==
	* [INFO] 10.244.0.14:37634 - 12892 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000088325s
	[INFO] 10.244.0.14:47763 - 4031 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.004616804s
	[INFO] 10.244.0.14:47763 - 53691 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.004941286s
	[INFO] 10.244.0.14:54049 - 47700 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004434759s
	[INFO] 10.244.0.14:54049 - 24409 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005863518s
	[INFO] 10.244.0.14:39083 - 2395 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005437583s
	[INFO] 10.244.0.14:39083 - 21597 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.006061972s
	[INFO] 10.244.0.14:38347 - 2877 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000046633s
	[INFO] 10.244.0.14:38347 - 22323 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000067719s
	[INFO] 10.244.0.18:35095 - 61024 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000177733s
	[INFO] 10.244.0.18:42291 - 25934 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000225726s
	[INFO] 10.244.0.18:58118 - 10695 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000096183s
	[INFO] 10.244.0.18:50066 - 6412 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000118314s
	[INFO] 10.244.0.18:43408 - 48722 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.0000981s
	[INFO] 10.244.0.18:39343 - 33371 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000122967s
	[INFO] 10.244.0.18:42917 - 32135 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.007312268s
	[INFO] 10.244.0.18:59603 - 43288 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.008291999s
	[INFO] 10.244.0.18:51113 - 8075 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.007346383s
	[INFO] 10.244.0.18:42773 - 7137 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.008885353s
	[INFO] 10.244.0.18:33282 - 11475 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005928135s
	[INFO] 10.244.0.18:46853 - 24909 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.0091122s
	[INFO] 10.244.0.18:38592 - 54801 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000649135s
	[INFO] 10.244.0.18:53368 - 41906 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.00082779s
	[INFO] 10.244.0.21:42259 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000168392s
	[INFO] 10.244.0.21:45342 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000135015s
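	
	The NXDOMAIN bursts are ordinary search-path expansion: each lookup walks cluster.local and the GCE-provided internal suffixes before the bare name answers NOERROR, so only the final NOERROR lines matter. To exercise the same resolution path from inside the cluster (a sketch; the dnsutils image is an assumption, any image shipping nslookup works):
	
	  kubectl --context addons-440446 run dnsutils --rm -it --restart=Never --image=registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3 -- nslookup registry.kube-system.svc.cluster.local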
	
	* 
	* ==> describe nodes <==
	* Name:               addons-440446
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-440446
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1bf6c3d5317028f348e55ea19d261973a6487d3c
	                    minikube.k8s.io/name=addons-440446
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_25T10_34_22_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-440446
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 25 Sep 2023 10:34:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-440446
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 25 Sep 2023 10:38:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 25 Sep 2023 10:36:55 +0000   Mon, 25 Sep 2023 10:34:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 25 Sep 2023 10:36:55 +0000   Mon, 25 Sep 2023 10:34:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 25 Sep 2023 10:36:55 +0000   Mon, 25 Sep 2023 10:34:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 25 Sep 2023 10:36:55 +0000   Mon, 25 Sep 2023 10:35:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-440446
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	System Info:
	  Machine ID:                 b78b6ac42fad44c492f3aa5740e3108d
	  System UUID:                0a1645a6-3bc0-4c7a-a4fd-8e15d2f0a084
	  Boot ID:                    a0198791-e836-4d6b-a7bd-f74954d514fc
	  Kernel Version:             5.15.0-1042-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-m7dx5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  gcp-auth                    gcp-auth-d4c87556c-q6q67                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  headlamp                    headlamp-58b88cff49-stwnz                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m39s
	  kube-system                 coredns-5dd5756b68-rtgtj                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m57s
	  kube-system                 etcd-addons-440446                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m9s
	  kube-system                 kindnet-8j4r4                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m57s
	  kube-system                 kube-apiserver-addons-440446             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-controller-manager-addons-440446    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-proxy-rpctb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 kube-scheduler-addons-440446             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m55s  kube-proxy       
	  Normal  Starting                 4m10s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m10s  kubelet          Node addons-440446 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m10s  kubelet          Node addons-440446 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m10s  kubelet          Node addons-440446 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m58s  node-controller  Node addons-440446 event: Registered Node addons-440446 in Controller
	  Normal  NodeReady                3m26s  kubelet          Node addons-440446 status is now: NodeReady
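	
	Nothing node-level explains the failure: the node is Ready with no memory, disk, or PID pressure, total requests are only 850m CPU / 220Mi memory, and no ingress-nginx pods remain in the table because the addon was disabled during cleanup. This view can be regenerated with:
	
	  kubectl --context addons-440446 describe node addons-440446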
	
	* 
	* ==> dmesg <==
	* [  +0.009109] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004217] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001059] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000002] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000000] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000000] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +9.866193] kauditd_printk_skb: 36 callbacks suppressed
	[Sep25 10:36] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 7a fe 5b 68 e1 03 6a ae 75 0d 8a 35 08 00
	[  +1.032289] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 7a fe 5b 68 e1 03 6a ae 75 0d 8a 35 08 00
	[  +2.011808] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 7a fe 5b 68 e1 03 6a ae 75 0d 8a 35 08 00
	[  +4.159595] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 7a fe 5b 68 e1 03 6a ae 75 0d 8a 35 08 00
	[  +8.191195] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 7a fe 5b 68 e1 03 6a ae 75 0d 8a 35 08 00
	[ +16.130407] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 7a fe 5b 68 e1 03 6a ae 75 0d 8a 35 08 00
	[Sep25 10:37] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 7a fe 5b 68 e1 03 6a ae 75 0d 8a 35 08 00
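	
	The martian-source lines are the most relevant clue in this dump: packets destined for a pod IP (10.244.0.17, plausibly the ingress controller given the hostPort 80/443 mappings torn down in the crio log above) arrive on eth0 with source 127.0.0.1 and are dropped by the kernel. The doubling intervals (~1s, 2s, 4s, 8s, 16s) look like TCP SYN retransmissions, consistent with the ingress test's loopback curl never completing its handshake: traffic DNAT'ed from 127.0.0.1 toward the controller's hostPort keeps its loopback source. The kube-proxy log further down notes route_localnet=1 is set for node ports, but hostPorts are handled by the CRI's hostport manager, so the setting is worth verifying on the node (a sketch; value 1 means loopback-sourced traffic may be routed):
	
	  out/minikube-linux-amd64 -p addons-440446 ssh "sysctl net.ipv4.conf.all.route_localnet"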
	
	* 
	* ==> etcd [9f6559c491eb3c0d55e469f51274cad1980cc117db14ca0575cb6341a47b54bf] <==
	* {"level":"info","ts":"2023-09-25T10:34:37.863355Z","caller":"traceutil/trace.go:171","msg":"trace[1249109094] transaction","detail":"{read_only:false; response_revision:387; number_of_response:1; }","duration":"196.188096ms","start":"2023-09-25T10:34:37.667148Z","end":"2023-09-25T10:34:37.863337Z","steps":["trace[1249109094] 'process raft request'  (duration: 98.365117ms)","trace[1249109094] 'compare'  (duration: 97.472947ms)"],"step_count":2}
	{"level":"info","ts":"2023-09-25T10:34:37.863584Z","caller":"traceutil/trace.go:171","msg":"trace[1890493741] transaction","detail":"{read_only:false; response_revision:388; number_of_response:1; }","duration":"116.865882ms","start":"2023-09-25T10:34:37.746707Z","end":"2023-09-25T10:34:37.863572Z","steps":["trace[1890493741] 'process raft request'  (duration: 116.391694ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-25T10:34:37.86382Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.170962ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/serviceips\" ","response":"range_response_count:1 size:116"}
	{"level":"info","ts":"2023-09-25T10:34:37.863856Z","caller":"traceutil/trace.go:171","msg":"trace[206680313] range","detail":"{range_begin:/registry/ranges/serviceips; range_end:; response_count:1; response_revision:388; }","duration":"114.214997ms","start":"2023-09-25T10:34:37.749626Z","end":"2023-09-25T10:34:37.863841Z","steps":["trace[206680313] 'agreement among raft nodes before linearized reading'  (duration: 114.067252ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-25T10:34:38.048307Z","caller":"traceutil/trace.go:171","msg":"trace[808196478] transaction","detail":"{read_only:false; response_revision:389; number_of_response:1; }","duration":"103.725954ms","start":"2023-09-25T10:34:37.851832Z","end":"2023-09-25T10:34:37.955558Z","steps":["trace[808196478] 'process raft request'  (duration: 99.898387ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-25T10:34:38.049238Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"188.787209ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/minikube-ingress-dns\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-25T10:34:38.049294Z","caller":"traceutil/trace.go:171","msg":"trace[1527073025] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/minikube-ingress-dns; range_end:; response_count:0; response_revision:389; }","duration":"188.845614ms","start":"2023-09-25T10:34:37.860425Z","end":"2023-09-25T10:34:38.04927Z","steps":["trace[1527073025] 'agreement among raft nodes before linearized reading'  (duration: 95.178101ms)","trace[1527073025] 'range keys from in-memory index tree'  (duration: 93.587707ms)"],"step_count":2}
	{"level":"warn","ts":"2023-09-25T10:34:38.049666Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"292.210302ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-440446\" ","response":"range_response_count:1 size:5654"}
	{"level":"info","ts":"2023-09-25T10:34:38.049705Z","caller":"traceutil/trace.go:171","msg":"trace[2091842541] range","detail":"{range_begin:/registry/minions/addons-440446; range_end:; response_count:1; response_revision:389; }","duration":"292.252934ms","start":"2023-09-25T10:34:37.757438Z","end":"2023-09-25T10:34:38.049691Z","steps":["trace[2091842541] 'agreement among raft nodes before linearized reading'  (duration: 196.54237ms)","trace[2091842541] 'range keys from in-memory index tree'  (duration: 95.599524ms)"],"step_count":2}
	{"level":"info","ts":"2023-09-25T10:35:50.43299Z","caller":"traceutil/trace.go:171","msg":"trace[1033863095] transaction","detail":"{read_only:false; response_revision:1093; number_of_response:1; }","duration":"145.646421ms","start":"2023-09-25T10:35:50.287325Z","end":"2023-09-25T10:35:50.432971Z","steps":["trace[1033863095] 'process raft request'  (duration: 145.530589ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-25T10:35:50.696392Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.031464ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128024035862699607 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1059 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:452 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-09-25T10:35:50.696516Z","caller":"traceutil/trace.go:171","msg":"trace[278653273] linearizableReadLoop","detail":"{readStateIndex:1130; appliedIndex:1128; }","duration":"199.569072ms","start":"2023-09-25T10:35:50.496937Z","end":"2023-09-25T10:35:50.696506Z","steps":["trace[278653273] 'read index received'  (duration: 60.924574ms)","trace[278653273] 'applied index is now lower than readState.Index'  (duration: 138.643841ms)"],"step_count":2}
	{"level":"info","ts":"2023-09-25T10:35:50.696572Z","caller":"traceutil/trace.go:171","msg":"trace[2108121179] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1097; }","duration":"200.463154ms","start":"2023-09-25T10:35:50.49609Z","end":"2023-09-25T10:35:50.696553Z","steps":["trace[2108121179] 'process raft request'  (duration: 200.367683ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-25T10:35:50.696605Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.388039ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/cloud-spanner-emulator-7d49f968d9-tt28p\" ","response":"range_response_count:1 size:3371"}
	{"level":"warn","ts":"2023-09-25T10:35:50.696612Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"199.676865ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/gadget.kinvolk.io/traces/\" range_end:\"/registry/gadget.kinvolk.io/traces0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-25T10:35:50.696669Z","caller":"traceutil/trace.go:171","msg":"trace[549710213] range","detail":"{range_begin:/registry/pods/default/cloud-spanner-emulator-7d49f968d9-tt28p; range_end:; response_count:1; response_revision:1097; }","duration":"102.4136ms","start":"2023-09-25T10:35:50.594207Z","end":"2023-09-25T10:35:50.69662Z","steps":["trace[549710213] 'agreement among raft nodes before linearized reading'  (duration: 102.365398ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-25T10:35:50.69668Z","caller":"traceutil/trace.go:171","msg":"trace[1985176398] range","detail":"{range_begin:/registry/gadget.kinvolk.io/traces/; range_end:/registry/gadget.kinvolk.io/traces0; response_count:0; response_revision:1097; }","duration":"199.756328ms","start":"2023-09-25T10:35:50.496913Z","end":"2023-09-25T10:35:50.696669Z","steps":["trace[1985176398] 'agreement among raft nodes before linearized reading'  (duration: 199.647502ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-25T10:35:50.696705Z","caller":"traceutil/trace.go:171","msg":"trace[1767203317] transaction","detail":"{read_only:false; response_revision:1096; number_of_response:1; }","duration":"240.925353ms","start":"2023-09-25T10:35:50.455761Z","end":"2023-09-25T10:35:50.696687Z","steps":["trace[1767203317] 'process raft request'  (duration: 102.11119ms)","trace[1767203317] 'compare'  (duration: 137.939789ms)"],"step_count":2}
	{"level":"info","ts":"2023-09-25T10:35:56.371296Z","caller":"traceutil/trace.go:171","msg":"trace[1398082980] linearizableReadLoop","detail":"{readStateIndex:1226; appliedIndex:1225; }","duration":"118.566289ms","start":"2023-09-25T10:35:56.252712Z","end":"2023-09-25T10:35:56.371278Z","steps":["trace[1398082980] 'read index received'  (duration: 38.34085ms)","trace[1398082980] 'applied index is now lower than readState.Index'  (duration: 80.224599ms)"],"step_count":2}
	{"level":"info","ts":"2023-09-25T10:35:56.371395Z","caller":"traceutil/trace.go:171","msg":"trace[709336704] transaction","detail":"{read_only:false; response_revision:1188; number_of_response:1; }","duration":"178.578721ms","start":"2023-09-25T10:35:56.192806Z","end":"2023-09-25T10:35:56.371384Z","steps":["trace[709336704] 'process raft request'  (duration: 98.287573ms)","trace[709336704] 'compare'  (duration: 80.106092ms)"],"step_count":2}
	{"level":"warn","ts":"2023-09-25T10:35:56.37149Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.495105ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" ","response":"range_response_count:1 size:3751"}
	{"level":"info","ts":"2023-09-25T10:35:56.371543Z","caller":"traceutil/trace.go:171","msg":"trace[1334085176] range","detail":"{range_begin:/registry/pods/headlamp/; range_end:/registry/pods/headlamp0; response_count:1; response_revision:1188; }","duration":"105.560566ms","start":"2023-09-25T10:35:56.265971Z","end":"2023-09-25T10:35:56.371532Z","steps":["trace[1334085176] 'agreement among raft nodes before linearized reading'  (duration: 105.456974ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-25T10:35:56.371643Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.945019ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/gadget\" ","response":"range_response_count:1 size:2002"}
	{"level":"info","ts":"2023-09-25T10:35:56.371675Z","caller":"traceutil/trace.go:171","msg":"trace[1643412296] range","detail":"{range_begin:/registry/namespaces/gadget; range_end:; response_count:1; response_revision:1188; }","duration":"118.984795ms","start":"2023-09-25T10:35:56.252683Z","end":"2023-09-25T10:35:56.371667Z","steps":["trace[1643412296] 'agreement among raft nodes before linearized reading'  (duration: 118.915973ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-25T10:36:01.680506Z","caller":"traceutil/trace.go:171","msg":"trace[1685532601] transaction","detail":"{read_only:false; response_revision:1242; number_of_response:1; }","duration":"100.574579ms","start":"2023-09-25T10:36:01.579908Z","end":"2023-09-25T10:36:01.680482Z","steps":["trace[1685532601] 'process raft request'  (duration: 37.029411ms)","trace[1685532601] 'compare'  (duration: 63.380358ms)"],"step_count":2}
	
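	All the etcd warnings here are one-off "apply request took too long" traces in the 100-300ms range during addon churn, not sustained latency, so they read as noise rather than a cause. If they were sustained, backend health could be checked from inside the etcd pod (a sketch; the cert paths are minikube's defaults and may differ):
	
	  kubectl --context addons-440446 -n kube-system exec etcd-addons-440446 -- etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/var/lib/minikube/certs/etcd/ca.crt --cert=/var/lib/minikube/certs/etcd/server.crt --key=/var/lib/minikube/certs/etcd/server.key endpoint status -w table
	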
	* 
	* ==> gcp-auth [165496d90be40159aa6f3a7ee6b9cfee57be8658ce04b0c861aed112f7c76b73] <==
	* 2023/09/25 10:35:38 GCP Auth Webhook started!
	2023/09/25 10:35:49 Ready to marshal response ...
	2023/09/25 10:35:49 Ready to write response ...
	2023/09/25 10:35:52 Ready to marshal response ...
	2023/09/25 10:35:52 Ready to write response ...
	2023/09/25 10:35:52 Ready to marshal response ...
	2023/09/25 10:35:52 Ready to write response ...
	2023/09/25 10:35:52 Ready to marshal response ...
	2023/09/25 10:35:52 Ready to write response ...
	2023/09/25 10:35:55 Ready to marshal response ...
	2023/09/25 10:35:55 Ready to write response ...
	2023/09/25 10:36:01 Ready to marshal response ...
	2023/09/25 10:36:01 Ready to write response ...
	2023/09/25 10:36:40 Ready to marshal response ...
	2023/09/25 10:36:40 Ready to write response ...
	2023/09/25 10:37:13 Ready to marshal response ...
	2023/09/25 10:37:13 Ready to write response ...
	2023/09/25 10:38:21 Ready to marshal response ...
	2023/09/25 10:38:21 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  10:38:31 up 21 min,  0 users,  load average: 0.36, 0.70, 0.35
	Linux addons-440446 5.15.0-1042-gcp #50~20.04.1-Ubuntu SMP Mon Sep 11 03:30:57 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [a6fe8542219feefff68d8b8d326256e42f6013bba33d1aa2ade5a80ce8a14dda] <==
	* I0925 10:36:25.767715       1 main.go:227] handling current node
	I0925 10:36:35.779738       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0925 10:36:35.779761       1 main.go:227] handling current node
	I0925 10:36:45.783058       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0925 10:36:45.783080       1 main.go:227] handling current node
	I0925 10:36:55.795105       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0925 10:36:55.795130       1 main.go:227] handling current node
	I0925 10:37:05.798978       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0925 10:37:05.799002       1 main.go:227] handling current node
	I0925 10:37:15.809709       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0925 10:37:15.809731       1 main.go:227] handling current node
	I0925 10:37:25.812851       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0925 10:37:25.812873       1 main.go:227] handling current node
	I0925 10:37:35.824062       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0925 10:37:35.824087       1 main.go:227] handling current node
	I0925 10:37:45.834188       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0925 10:37:45.834208       1 main.go:227] handling current node
	I0925 10:37:55.838178       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0925 10:37:55.838203       1 main.go:227] handling current node
	I0925 10:38:05.850065       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0925 10:38:05.850088       1 main.go:227] handling current node
	I0925 10:38:15.854150       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0925 10:38:15.854174       1 main.go:227] handling current node
	I0925 10:38:25.863981       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0925 10:38:25.864003       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [89a22f84223c27199d5527a5b1e81d97eb19618a64541ed0b199def98ef29b57] <==
	* E0925 10:35:52.822081       1 upgradeaware.go:425] Error proxying data from client to backend: read tcp 192.168.49.2:8443->10.244.0.19:37068: read: connection reset by peer
	I0925 10:36:01.564310       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0925 10:36:01.985488       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.103.3"}
	I0925 10:36:33.565763       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0925 10:36:52.343035       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0925 10:37:28.956540       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0925 10:37:28.956598       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0925 10:37:28.962502       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0925 10:37:28.962564       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0925 10:37:28.970043       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0925 10:37:28.970085       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0925 10:37:28.970179       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0925 10:37:28.970223       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0925 10:37:28.979653       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0925 10:37:28.979703       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0925 10:37:28.993185       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0925 10:37:28.993249       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0925 10:37:29.059305       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0925 10:37:29.059360       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0925 10:37:29.061229       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0925 10:37:29.061264       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0925 10:37:29.971450       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0925 10:37:30.061325       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0925 10:37:30.071394       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0925 10:38:21.652721       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.130.186"}
	
	* 
	* ==> kube-controller-manager [35b6df80b3a6951415ba97dec1d8ca03eceb3da910828e236491648a5ba3a65d] <==
	* W0925 10:37:46.549594       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0925 10:37:46.549624       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0925 10:37:46.727295       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0925 10:37:46.727321       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0925 10:37:48.890526       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0925 10:37:48.890557       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0925 10:37:59.906096       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0925 10:37:59.906125       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0925 10:38:01.823514       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0925 10:38:01.823541       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0925 10:38:02.415762       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0925 10:38:02.415800       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0925 10:38:21.500871       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0925 10:38:21.509302       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-m7dx5"
	I0925 10:38:21.513430       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="12.709706ms"
	I0925 10:38:21.518108       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="4.550054ms"
	I0925 10:38:21.518184       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="43.824µs"
	I0925 10:38:21.523358       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="59.287µs"
	I0925 10:38:22.889465       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="6.644279ms"
	I0925 10:38:22.889596       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="79.677µs"
	I0925 10:38:23.389532       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0925 10:38:23.390124       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-f6b66b4b9" duration="19.271µs"
	I0925 10:38:23.393327       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	W0925 10:38:31.733721       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0925 10:38:31.733755       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
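	
	The repeating PartialObjectMetadata list/watch failures line up with the apiserver log above terminating the snapshot.storage.k8s.io watchers at 10:37:29-30: the controller-manager's metadata informers keep retrying against volumesnapshot CRDs that an addon teardown deleted mid-run, and they quiet down once discovery catches up. Harmless for this test; the CRDs' absence is confirmable with:
	
	  kubectl --context addons-440446 get crd | grep snapshot.storage.k8s.io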
	
	* 
	* ==> kube-proxy [5ac8d0716aef578f1b00b5c3877264fcbed364586fb7b136998eaadcb73703b9] <==
	* I0925 10:34:35.352127       1 server_others.go:69] "Using iptables proxy"
	I0925 10:34:35.450127       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0925 10:34:36.051709       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0925 10:34:36.065534       1 server_others.go:152] "Using iptables Proxier"
	I0925 10:34:36.144857       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0925 10:34:36.146359       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0925 10:34:36.146461       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0925 10:34:36.149538       1 server.go:846] "Version info" version="v1.28.2"
	I0925 10:34:36.149567       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0925 10:34:36.151468       1 config.go:188] "Starting service config controller"
	I0925 10:34:36.151490       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0925 10:34:36.151549       1 config.go:97] "Starting endpoint slice config controller"
	I0925 10:34:36.151562       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0925 10:34:36.152038       1 config.go:315] "Starting node config controller"
	I0925 10:34:36.152052       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0925 10:34:36.251798       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0925 10:34:36.252667       1 shared_informer.go:318] Caches are synced for service config
	I0925 10:34:36.252694       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [3b90f1e6d315bfabd4c6e460ead07db51ca10ab410f0febf01329a1b0383734f] <==
	* W0925 10:34:18.955937       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0925 10:34:18.955989       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0925 10:34:18.955998       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0925 10:34:18.956038       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0925 10:34:18.956042       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0925 10:34:18.956044       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0925 10:34:18.956052       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0925 10:34:18.956044       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0925 10:34:18.956091       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0925 10:34:18.956101       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0925 10:34:18.956152       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0925 10:34:18.956155       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0925 10:34:18.956166       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0925 10:34:18.956172       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0925 10:34:19.819125       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0925 10:34:19.819154       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0925 10:34:19.824268       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0925 10:34:19.824294       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0925 10:34:19.910812       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0925 10:34:19.910854       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0925 10:34:19.972361       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0925 10:34:19.972394       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0925 10:34:19.973245       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0925 10:34:19.973265       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0925 10:34:20.550670       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Sep 25 10:38:22 addons-440446 kubelet[1557]: E0925 10:38:22.048058    1557 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/1c8e2ba68fd210be397d40f95aefd6c6e60af4cab4a52a749161deb2b4276b7b/diff" to get inode usage: stat /var/lib/containers/storage/overlay/1c8e2ba68fd210be397d40f95aefd6c6e60af4cab4a52a749161deb2b4276b7b/diff: no such file or directory, extraDiskErr: <nil>
	Sep 25 10:38:22 addons-440446 kubelet[1557]: E0925 10:38:22.052176    1557 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/8a473f5118b1ad5ec54a23fac782cfceb02f86e3752728a66ab9cdf51bf3a2bd/diff" to get inode usage: stat /var/lib/containers/storage/overlay/8a473f5118b1ad5ec54a23fac782cfceb02f86e3752728a66ab9cdf51bf3a2bd/diff: no such file or directory, extraDiskErr: <nil>
	Sep 25 10:38:22 addons-440446 kubelet[1557]: E0925 10:38:22.054336    1557 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/eb9ef74df9a8a7434bba4f833d641624a1701e5c80420209407a2aeb5e1a4992/diff" to get inode usage: stat /var/lib/containers/storage/overlay/eb9ef74df9a8a7434bba4f833d641624a1701e5c80420209407a2aeb5e1a4992/diff: no such file or directory, extraDiskErr: <nil>
	Sep 25 10:38:22 addons-440446 kubelet[1557]: I0925 10:38:22.687387    1557 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpn9t\" (UniqueName: \"kubernetes.io/projected/f8c32fc4-78a1-4124-a57d-171517f70e26-kube-api-access-rpn9t\") pod \"f8c32fc4-78a1-4124-a57d-171517f70e26\" (UID: \"f8c32fc4-78a1-4124-a57d-171517f70e26\") "
	Sep 25 10:38:22 addons-440446 kubelet[1557]: I0925 10:38:22.689254    1557 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8c32fc4-78a1-4124-a57d-171517f70e26-kube-api-access-rpn9t" (OuterVolumeSpecName: "kube-api-access-rpn9t") pod "f8c32fc4-78a1-4124-a57d-171517f70e26" (UID: "f8c32fc4-78a1-4124-a57d-171517f70e26"). InnerVolumeSpecName "kube-api-access-rpn9t". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 25 10:38:22 addons-440446 kubelet[1557]: I0925 10:38:22.788499    1557 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-rpn9t\" (UniqueName: \"kubernetes.io/projected/f8c32fc4-78a1-4124-a57d-171517f70e26-kube-api-access-rpn9t\") on node \"addons-440446\" DevicePath \"\""
	Sep 25 10:38:22 addons-440446 kubelet[1557]: I0925 10:38:22.873859    1557 scope.go:117] "RemoveContainer" containerID="3d339cb33991377a37bcd71ee9d3a04e54cee4f6d5c096978550f457323393f5"
	Sep 25 10:38:22 addons-440446 kubelet[1557]: I0925 10:38:22.882577    1557 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-world-app-5d77478584-m7dx5" podStartSLOduration=1.124251085 podCreationTimestamp="2023-09-25 10:38:21 +0000 UTC" firstStartedPulling="2023-09-25 10:38:21.948937726 +0000 UTC m=+240.170064365" lastFinishedPulling="2023-09-25 10:38:22.707215979 +0000 UTC m=+240.928342628" observedRunningTime="2023-09-25 10:38:22.881968778 +0000 UTC m=+241.103095428" watchObservedRunningTime="2023-09-25 10:38:22.882529348 +0000 UTC m=+241.103655999"
	Sep 25 10:38:22 addons-440446 kubelet[1557]: I0925 10:38:22.890595    1557 scope.go:117] "RemoveContainer" containerID="3d339cb33991377a37bcd71ee9d3a04e54cee4f6d5c096978550f457323393f5"
	Sep 25 10:38:22 addons-440446 kubelet[1557]: E0925 10:38:22.891044    1557 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d339cb33991377a37bcd71ee9d3a04e54cee4f6d5c096978550f457323393f5\": container with ID starting with 3d339cb33991377a37bcd71ee9d3a04e54cee4f6d5c096978550f457323393f5 not found: ID does not exist" containerID="3d339cb33991377a37bcd71ee9d3a04e54cee4f6d5c096978550f457323393f5"
	Sep 25 10:38:22 addons-440446 kubelet[1557]: I0925 10:38:22.891096    1557 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d339cb33991377a37bcd71ee9d3a04e54cee4f6d5c096978550f457323393f5"} err="failed to get container status \"3d339cb33991377a37bcd71ee9d3a04e54cee4f6d5c096978550f457323393f5\": rpc error: code = NotFound desc = could not find container \"3d339cb33991377a37bcd71ee9d3a04e54cee4f6d5c096978550f457323393f5\": container with ID starting with 3d339cb33991377a37bcd71ee9d3a04e54cee4f6d5c096978550f457323393f5 not found: ID does not exist"
	Sep 25 10:38:23 addons-440446 kubelet[1557]: I0925 10:38:23.869853    1557 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="480b98ec-be92-4921-a6ea-68369bdefb7d" path="/var/lib/kubelet/pods/480b98ec-be92-4921-a6ea-68369bdefb7d/volumes"
	Sep 25 10:38:23 addons-440446 kubelet[1557]: I0925 10:38:23.870210    1557 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="9d11636f-70eb-4076-8fd5-771fa6732985" path="/var/lib/kubelet/pods/9d11636f-70eb-4076-8fd5-771fa6732985/volumes"
	Sep 25 10:38:23 addons-440446 kubelet[1557]: I0925 10:38:23.870486    1557 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f8c32fc4-78a1-4124-a57d-171517f70e26" path="/var/lib/kubelet/pods/f8c32fc4-78a1-4124-a57d-171517f70e26/volumes"
	Sep 25 10:38:26 addons-440446 kubelet[1557]: I0925 10:38:26.712458    1557 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ps5sx\" (UniqueName: \"kubernetes.io/projected/7347bdfe-b4fb-45c2-9da7-d6695437c6de-kube-api-access-ps5sx\") pod \"7347bdfe-b4fb-45c2-9da7-d6695437c6de\" (UID: \"7347bdfe-b4fb-45c2-9da7-d6695437c6de\") "
	Sep 25 10:38:26 addons-440446 kubelet[1557]: I0925 10:38:26.712520    1557 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7347bdfe-b4fb-45c2-9da7-d6695437c6de-webhook-cert\") pod \"7347bdfe-b4fb-45c2-9da7-d6695437c6de\" (UID: \"7347bdfe-b4fb-45c2-9da7-d6695437c6de\") "
	Sep 25 10:38:26 addons-440446 kubelet[1557]: I0925 10:38:26.714354    1557 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7347bdfe-b4fb-45c2-9da7-d6695437c6de-kube-api-access-ps5sx" (OuterVolumeSpecName: "kube-api-access-ps5sx") pod "7347bdfe-b4fb-45c2-9da7-d6695437c6de" (UID: "7347bdfe-b4fb-45c2-9da7-d6695437c6de"). InnerVolumeSpecName "kube-api-access-ps5sx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 25 10:38:26 addons-440446 kubelet[1557]: I0925 10:38:26.714858    1557 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7347bdfe-b4fb-45c2-9da7-d6695437c6de-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "7347bdfe-b4fb-45c2-9da7-d6695437c6de" (UID: "7347bdfe-b4fb-45c2-9da7-d6695437c6de"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 25 10:38:26 addons-440446 kubelet[1557]: I0925 10:38:26.813449    1557 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7347bdfe-b4fb-45c2-9da7-d6695437c6de-webhook-cert\") on node \"addons-440446\" DevicePath \"\""
	Sep 25 10:38:26 addons-440446 kubelet[1557]: I0925 10:38:26.813483    1557 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ps5sx\" (UniqueName: \"kubernetes.io/projected/7347bdfe-b4fb-45c2-9da7-d6695437c6de-kube-api-access-ps5sx\") on node \"addons-440446\" DevicePath \"\""
	Sep 25 10:38:26 addons-440446 kubelet[1557]: I0925 10:38:26.883231    1557 scope.go:117] "RemoveContainer" containerID="27a048afd87d8efa86657d259c33c7b11817122d023e16ba562c9851cb5b8ff9"
	Sep 25 10:38:26 addons-440446 kubelet[1557]: I0925 10:38:26.899552    1557 scope.go:117] "RemoveContainer" containerID="27a048afd87d8efa86657d259c33c7b11817122d023e16ba562c9851cb5b8ff9"
	Sep 25 10:38:26 addons-440446 kubelet[1557]: E0925 10:38:26.899946    1557 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27a048afd87d8efa86657d259c33c7b11817122d023e16ba562c9851cb5b8ff9\": container with ID starting with 27a048afd87d8efa86657d259c33c7b11817122d023e16ba562c9851cb5b8ff9 not found: ID does not exist" containerID="27a048afd87d8efa86657d259c33c7b11817122d023e16ba562c9851cb5b8ff9"
	Sep 25 10:38:26 addons-440446 kubelet[1557]: I0925 10:38:26.899987    1557 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27a048afd87d8efa86657d259c33c7b11817122d023e16ba562c9851cb5b8ff9"} err="failed to get container status \"27a048afd87d8efa86657d259c33c7b11817122d023e16ba562c9851cb5b8ff9\": rpc error: code = NotFound desc = could not find container \"27a048afd87d8efa86657d259c33c7b11817122d023e16ba562c9851cb5b8ff9\": container with ID starting with 27a048afd87d8efa86657d259c33c7b11817122d023e16ba562c9851cb5b8ff9 not found: ID does not exist"
	Sep 25 10:38:27 addons-440446 kubelet[1557]: I0925 10:38:27.869976    1557 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7347bdfe-b4fb-45c2-9da7-d6695437c6de" path="/var/lib/kubelet/pods/7347bdfe-b4fb-45c2-9da7-d6695437c6de/volumes"
	
	* 
	* ==> storage-provisioner [500aff2324cec715f5df1feb67cf381c4945722dbfee2efff78c99ea8b1abeb3] <==
	* I0925 10:35:06.694245       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0925 10:35:06.701294       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0925 10:35:06.701328       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0925 10:35:06.706572       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0925 10:35:06.706698       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-440446_6a53cac3-8032-4d6f-9e29-010b5877f63f!
	I0925 10:35:06.706714       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"46474567-e469-41e6-9a00-b2bc96359304", APIVersion:"v1", ResourceVersion:"821", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-440446_6a53cac3-8032-4d6f-9e29-010b5877f63f became leader
	I0925 10:35:06.807021       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-440446_6a53cac3-8032-4d6f-9e29-010b5877f63f!

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-440446 -n addons-440446
helpers_test.go:261: (dbg) Run:  kubectl --context addons-440446 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (151.12s)
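
The failing step here is the in-cluster HTTP probe: minikube ssh runs curl inside the node against the ingress controller on 127.0.0.1, and curl's exit code 28 (operation timed out) is what surfaces as "ssh: Process exited with status 28". A minimal sketch of the same probe, assuming the binary path, profile name, and two-minute budget shown in the log above (this standalone program is illustrative, not the harness code):

	// probe.go: a minimal sketch, assuming a built minikube binary at
	// out/minikube-linux-amd64 and the profile "addons-440446" from the log.
	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Mirror the test's command; curl exits 28 when the request times out,
		// which the ssh wrapper reports as "Process exited with status 28".
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()
		cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64", "-p", "addons-440446",
			"ssh", "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'")
		out, err := cmd.CombinedOutput()
		fmt.Printf("out=%q err=%v\n", out, err)
	}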

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (7.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-104204 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (5.348845029s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 image ls
functional_test.go:447: (dbg) Done: out/minikube-linux-amd64 -p functional-104204 image ls: (2.207557404s)
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-104204" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (7.56s)
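
The pattern this test exercises is load-then-verify: minikube image load pushes a saved tarball into the node's container runtime (cri-o in this job), and minikube image ls must afterwards list the expected tag. A hedged sketch of the same two-step check, reusing the paths and names from the failure above (the program itself is illustrative, not the test code):

	// imagecheck.go: a sketch of the load-then-verify step, assuming the
	// binary, profile, tarball path, and tag shown in the failure above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		profile := "functional-104204"
		tarball := "/home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar"
		want := "gcr.io/google-containers/addon-resizer:" + profile

		// Step 1: load the saved image tarball into the cluster's runtime.
		if err := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"image", "load", tarball).Run(); err != nil {
			fmt.Println("load failed:", err)
			return
		}
		// Step 2: the tag must appear in image ls; its absence is the
		// "expected ... to be loaded into minikube" failure above.
		out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"image", "ls").Output()
		fmt.Println("loaded:", err == nil && strings.Contains(string(out), want))
	}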

TestIngressAddonLegacy/serial/ValidateIngressAddons (180.25s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-260900 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-260900 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (12.81715318s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-260900 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-260900 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [d71ae27b-41b0-4965-8ab5-555a1be0b2e0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [d71ae27b-41b0-4965-8ab5-555a1be0b2e0] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.007182859s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-260900 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0925 10:45:44.740166   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/client.crt: no such file or directory
E0925 10:46:12.425915   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/client.crt: no such file or directory
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-260900 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.206929949s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-260900 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-260900 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.004473738s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
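
The nslookup timeout above is equivalent to a DNS query sent straight to the node IP, where the ingress-dns addon is expected to answer on port 53. A sketch of that query with Go's net.Resolver, assuming the host name and the 192.168.49.2 address from the log (the resolver setup is illustrative, not the harness code):

	// dnscheck.go: a sketch that forces DNS queries to the minikube node IP,
	// reproducing "nslookup hello-john.test 192.168.49.2" programmatically.
	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		r := &net.Resolver{
			PreferGo: true,
			// Dial the node IP directly instead of the system resolver.
			Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
				d := net.Dialer{Timeout: 5 * time.Second}
				return d.DialContext(ctx, network, "192.168.49.2:53")
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
		defer cancel()
		addrs, err := r.LookupHost(ctx, "hello-john.test")
		// A dead ingress-dns endpoint shows up as a timeout error here, the
		// same condition nslookup reports as "no servers could be reached".
		fmt.Println(addrs, err)
	}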
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-260900 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-260900 addons disable ingress-dns --alsologtostderr -v=1: (2.354804264s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-260900 addons disable ingress --alsologtostderr -v=1
E0925 10:47:16.174421   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/functional-104204/client.crt: no such file or directory
E0925 10:47:16.179709   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/functional-104204/client.crt: no such file or directory
E0925 10:47:16.190029   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/functional-104204/client.crt: no such file or directory
E0925 10:47:16.210252   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/functional-104204/client.crt: no such file or directory
E0925 10:47:16.250531   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/functional-104204/client.crt: no such file or directory
E0925 10:47:16.330871   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/functional-104204/client.crt: no such file or directory
E0925 10:47:16.491262   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/functional-104204/client.crt: no such file or directory
E0925 10:47:16.811808   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/functional-104204/client.crt: no such file or directory
E0925 10:47:17.452723   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/functional-104204/client.crt: no such file or directory
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-260900 addons disable ingress --alsologtostderr -v=1: (7.367583314s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-260900
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-260900:

-- stdout --
	[
	    {
	        "Id": "c7c985e16e6d68f2747976bc8172b0a3996d480d0cec3f5af9634869196c0908",
	        "Created": "2023-09-25T10:43:19.707425366Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 51950,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-09-25T10:43:19.973396749Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:88569b0f9771c4a82a68adf2effda10d3720003bb7c688860ef975d692546171",
	        "ResolvConfPath": "/var/lib/docker/containers/c7c985e16e6d68f2747976bc8172b0a3996d480d0cec3f5af9634869196c0908/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c7c985e16e6d68f2747976bc8172b0a3996d480d0cec3f5af9634869196c0908/hostname",
	        "HostsPath": "/var/lib/docker/containers/c7c985e16e6d68f2747976bc8172b0a3996d480d0cec3f5af9634869196c0908/hosts",
	        "LogPath": "/var/lib/docker/containers/c7c985e16e6d68f2747976bc8172b0a3996d480d0cec3f5af9634869196c0908/c7c985e16e6d68f2747976bc8172b0a3996d480d0cec3f5af9634869196c0908-json.log",
	        "Name": "/ingress-addon-legacy-260900",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-260900:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ingress-addon-legacy-260900",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bf903bba89aa1df0d45ea0104eea0833909c89fd792e21c39f43aff2f1abcca3-init/diff:/var/lib/docker/overlay2/f6c0857361d94c26f0cbf62f9795a30e8812e7f7d65e2dc29161b25ea9a7ede1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bf903bba89aa1df0d45ea0104eea0833909c89fd792e21c39f43aff2f1abcca3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bf903bba89aa1df0d45ea0104eea0833909c89fd792e21c39f43aff2f1abcca3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bf903bba89aa1df0d45ea0104eea0833909c89fd792e21c39f43aff2f1abcca3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-260900",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-260900/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-260900",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-260900",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-260900",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "42873b26ea6a945bfab08f67f02a9ac1d5dabdfa9f047e2909f19babee784712",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/42873b26ea6a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-260900": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c7c985e16e6d",
	                        "ingress-addon-legacy-260900"
	                    ],
	                    "NetworkID": "de2f18a7be9b4726e6193bfec880f4a93371dc2de1bdcbb88873f1dac584912e",
	                    "EndpointID": "6751605e277713a69da5202fa4bcff15b1664db683612f70b9fba5b52d0e9f0a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
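
For triage, the long inspect dump above boils down to a few fields: container state, the node IP, and the published API server port. docker inspect can extract those directly with a Go template instead of the full JSON; a sketch using the container name from this post-mortem (the particular field selection is illustrative):

	// inspectfields.go: a sketch that pulls the triage-relevant fields from
	// docker inspect instead of reading the full JSON dump above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		format := `{{.State.Status}} ` +
			`{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}} ` +
			`{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "inspect", "-f", format,
			"ingress-addon-legacy-260900").Output()
		// With the dump above this prints: running 192.168.49.2 32784
		fmt.Printf("%s err=%v\n", out, err)
	}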
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-260900 -n ingress-addon-legacy-260900
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-260900 logs -n 25
E0925 10:47:18.733520   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/functional-104204/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-260900 logs -n 25: (1.006272186s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|-----------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                 Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|-----------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| mount          | -p functional-104204                                                  | functional-104204           | jenkins | v1.31.2 | 25 Sep 23 10:42 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup493022370/001:/mount1 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                |                             |         |         |                     |                     |
	| ssh            | functional-104204 ssh findmnt                                         | functional-104204           | jenkins | v1.31.2 | 25 Sep 23 10:42 UTC | 25 Sep 23 10:42 UTC |
	|                | -T /mount1                                                            |                             |         |         |                     |                     |
	| ssh            | functional-104204 ssh findmnt                                         | functional-104204           | jenkins | v1.31.2 | 25 Sep 23 10:42 UTC | 25 Sep 23 10:42 UTC |
	|                | -T /mount2                                                            |                             |         |         |                     |                     |
	| ssh            | functional-104204 ssh findmnt                                         | functional-104204           | jenkins | v1.31.2 | 25 Sep 23 10:42 UTC | 25 Sep 23 10:42 UTC |
	|                | -T /mount3                                                            |                             |         |         |                     |                     |
	| mount          | -p functional-104204                                                  | functional-104204           | jenkins | v1.31.2 | 25 Sep 23 10:42 UTC |                     |
	|                | --kill=true                                                           |                             |         |         |                     |                     |
	| update-context | functional-104204                                                     | functional-104204           | jenkins | v1.31.2 | 25 Sep 23 10:42 UTC | 25 Sep 23 10:42 UTC |
	|                | update-context                                                        |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                |                             |         |         |                     |                     |
	| update-context | functional-104204                                                     | functional-104204           | jenkins | v1.31.2 | 25 Sep 23 10:42 UTC | 25 Sep 23 10:42 UTC |
	|                | update-context                                                        |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                |                             |         |         |                     |                     |
	| update-context | functional-104204                                                     | functional-104204           | jenkins | v1.31.2 | 25 Sep 23 10:42 UTC | 25 Sep 23 10:42 UTC |
	|                | update-context                                                        |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                |                             |         |         |                     |                     |
	| image          | functional-104204 image ls                                            | functional-104204           | jenkins | v1.31.2 | 25 Sep 23 10:42 UTC | 25 Sep 23 10:42 UTC |
	| image          | functional-104204 image save --daemon                                 | functional-104204           | jenkins | v1.31.2 | 25 Sep 23 10:42 UTC | 25 Sep 23 10:42 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-104204              |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                     |                             |         |         |                     |                     |
	| image          | functional-104204                                                     | functional-104204           | jenkins | v1.31.2 | 25 Sep 23 10:42 UTC | 25 Sep 23 10:42 UTC |
	|                | image ls --format short                                               |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                     |                             |         |         |                     |                     |
	| ssh            | functional-104204 ssh pgrep                                           | functional-104204           | jenkins | v1.31.2 | 25 Sep 23 10:42 UTC |                     |
	|                | buildkitd                                                             |                             |         |         |                     |                     |
	| image          | functional-104204                                                     | functional-104204           | jenkins | v1.31.2 | 25 Sep 23 10:42 UTC | 25 Sep 23 10:42 UTC |
	|                | image ls --format yaml                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                     |                             |         |         |                     |                     |
	| image          | functional-104204                                                     | functional-104204           | jenkins | v1.31.2 | 25 Sep 23 10:42 UTC | 25 Sep 23 10:42 UTC |
	|                | image ls --format json                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                     |                             |         |         |                     |                     |
	| image          | functional-104204                                                     | functional-104204           | jenkins | v1.31.2 | 25 Sep 23 10:42 UTC | 25 Sep 23 10:42 UTC |
	|                | image ls --format table                                               |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                     |                             |         |         |                     |                     |
	| image          | functional-104204 image build -t                                      | functional-104204           | jenkins | v1.31.2 | 25 Sep 23 10:42 UTC | 25 Sep 23 10:42 UTC |
	|                | localhost/my-image:functional-104204                                  |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                      |                             |         |         |                     |                     |
	| image          | functional-104204 image ls                                            | functional-104204           | jenkins | v1.31.2 | 25 Sep 23 10:42 UTC | 25 Sep 23 10:42 UTC |
	| delete         | -p functional-104204                                                  | functional-104204           | jenkins | v1.31.2 | 25 Sep 23 10:43 UTC | 25 Sep 23 10:43 UTC |
	| start          | -p ingress-addon-legacy-260900                                        | ingress-addon-legacy-260900 | jenkins | v1.31.2 | 25 Sep 23 10:43 UTC | 25 Sep 23 10:44 UTC |
	|                | --kubernetes-version=v1.18.20                                         |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                             |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                     |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                                                  |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                              |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-260900                                           | ingress-addon-legacy-260900 | jenkins | v1.31.2 | 25 Sep 23 10:44 UTC | 25 Sep 23 10:44 UTC |
	|                | addons enable ingress                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-260900                                           | ingress-addon-legacy-260900 | jenkins | v1.31.2 | 25 Sep 23 10:44 UTC | 25 Sep 23 10:44 UTC |
	|                | addons enable ingress-dns                                             |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-260900                                           | ingress-addon-legacy-260900 | jenkins | v1.31.2 | 25 Sep 23 10:44 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                         |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                          |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-260900 ip                                        | ingress-addon-legacy-260900 | jenkins | v1.31.2 | 25 Sep 23 10:46 UTC | 25 Sep 23 10:46 UTC |
	| addons         | ingress-addon-legacy-260900                                           | ingress-addon-legacy-260900 | jenkins | v1.31.2 | 25 Sep 23 10:47 UTC | 25 Sep 23 10:47 UTC |
	|                | addons disable ingress-dns                                            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-260900                                           | ingress-addon-legacy-260900 | jenkins | v1.31.2 | 25 Sep 23 10:47 UTC | 25 Sep 23 10:47 UTC |
	|                | addons disable ingress                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                |                             |         |         |                     |                     |
	|----------------|-----------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/25 10:43:06
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0925 10:43:06.022192   51318 out.go:296] Setting OutFile to fd 1 ...
	I0925 10:43:06.022461   51318 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 10:43:06.022470   51318 out.go:309] Setting ErrFile to fd 2...
	I0925 10:43:06.022475   51318 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 10:43:06.022666   51318 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17297-5744/.minikube/bin
	I0925 10:43:06.023235   51318 out.go:303] Setting JSON to false
	I0925 10:43:06.024294   51318 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1538,"bootTime":1695637048,"procs":511,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0925 10:43:06.024347   51318 start.go:138] virtualization: kvm guest
	I0925 10:43:06.027122   51318 out.go:177] * [ingress-addon-legacy-260900] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0925 10:43:06.028905   51318 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 10:43:06.030248   51318 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 10:43:06.028904   51318 notify.go:220] Checking for updates...
	I0925 10:43:06.032873   51318 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17297-5744/kubeconfig
	I0925 10:43:06.034252   51318 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17297-5744/.minikube
	I0925 10:43:06.035613   51318 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0925 10:43:06.037001   51318 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 10:43:06.038406   51318 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 10:43:06.060150   51318 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0925 10:43:06.060277   51318 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0925 10:43:06.111338   51318 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:37 SystemTime:2023-09-25 10:43:06.102999643 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1042-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0925 10:43:06.111494   51318 docker.go:294] overlay module found
	I0925 10:43:06.114313   51318 out.go:177] * Using the docker driver based on user configuration
	I0925 10:43:06.115706   51318 start.go:298] selected driver: docker
	I0925 10:43:06.115720   51318 start.go:902] validating driver "docker" against <nil>
	I0925 10:43:06.115734   51318 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 10:43:06.116463   51318 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0925 10:43:06.166615   51318 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:37 SystemTime:2023-09-25 10:43:06.158576029 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1042-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0925 10:43:06.166770   51318 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0925 10:43:06.166964   51318 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 10:43:06.168906   51318 out.go:177] * Using Docker driver with root privileges
	I0925 10:43:06.170368   51318 cni.go:84] Creating CNI manager for ""
	I0925 10:43:06.170384   51318 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0925 10:43:06.170395   51318 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0925 10:43:06.170405   51318 start_flags.go:321] config:
	{Name:ingress-addon-legacy-260900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-260900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 10:43:06.171831   51318 out.go:177] * Starting control plane node ingress-addon-legacy-260900 in cluster ingress-addon-legacy-260900
	I0925 10:43:06.173118   51318 cache.go:122] Beginning downloading kic base image for docker with crio
	I0925 10:43:06.174423   51318 out.go:177] * Pulling base image ...
	I0925 10:43:06.175620   51318 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0925 10:43:06.175647   51318 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I0925 10:43:06.191228   51318 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon, skipping pull
	I0925 10:43:06.191258   51318 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 exists in daemon, skipping load
	I0925 10:43:06.205499   51318 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0925 10:43:06.205528   51318 cache.go:57] Caching tarball of preloaded images
	I0925 10:43:06.205680   51318 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0925 10:43:06.207635   51318 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0925 10:43:06.209050   51318 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0925 10:43:06.245057   51318 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17297-5744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0925 10:43:11.400514   51318 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0925 10:43:11.400603   51318 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17297-5744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0925 10:43:12.538390   51318 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I0925 10:43:12.538794   51318 profile.go:148] Saving config to /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/config.json ...
	I0925 10:43:12.538829   51318 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/config.json: {Name:mk954274e639dca3f698e7cc18cd1b6d3e609327 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 10:43:12.538999   51318 cache.go:195] Successfully downloaded all kic artifacts
	I0925 10:43:12.539021   51318 start.go:365] acquiring machines lock for ingress-addon-legacy-260900: {Name:mk9a6b38694034ab4ed2cdd1b79ab20c4d43c9e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 10:43:12.539063   51318 start.go:369] acquired machines lock for "ingress-addon-legacy-260900" in 32.796µs
	I0925 10:43:12.539080   51318 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-260900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-260900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0925 10:43:12.539148   51318 start.go:125] createHost starting for "" (driver="docker")
	I0925 10:43:12.541450   51318 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0925 10:43:12.541707   51318 start.go:159] libmachine.API.Create for "ingress-addon-legacy-260900" (driver="docker")
	I0925 10:43:12.541738   51318 client.go:168] LocalClient.Create starting
	I0925 10:43:12.541852   51318 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca.pem
	I0925 10:43:12.541885   51318 main.go:141] libmachine: Decoding PEM data...
	I0925 10:43:12.541905   51318 main.go:141] libmachine: Parsing certificate...
	I0925 10:43:12.541952   51318 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17297-5744/.minikube/certs/cert.pem
	I0925 10:43:12.541979   51318 main.go:141] libmachine: Decoding PEM data...
	I0925 10:43:12.541993   51318 main.go:141] libmachine: Parsing certificate...
	I0925 10:43:12.542273   51318 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-260900 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0925 10:43:12.557414   51318 cli_runner.go:211] docker network inspect ingress-addon-legacy-260900 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0925 10:43:12.557481   51318 network_create.go:281] running [docker network inspect ingress-addon-legacy-260900] to gather additional debugging logs...
	I0925 10:43:12.557500   51318 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-260900
	W0925 10:43:12.572428   51318 cli_runner.go:211] docker network inspect ingress-addon-legacy-260900 returned with exit code 1
	I0925 10:43:12.572457   51318 network_create.go:284] error running [docker network inspect ingress-addon-legacy-260900]: docker network inspect ingress-addon-legacy-260900: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-260900 not found
	I0925 10:43:12.572478   51318 network_create.go:286] output of [docker network inspect ingress-addon-legacy-260900]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-260900 not found
	
	** /stderr **
	I0925 10:43:12.572525   51318 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0925 10:43:12.587585   51318 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00128aa40}
	I0925 10:43:12.587630   51318 network_create.go:123] attempt to create docker network ingress-addon-legacy-260900 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0925 10:43:12.587675   51318 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-260900 ingress-addon-legacy-260900
	I0925 10:43:12.638791   51318 network_create.go:107] docker network ingress-addon-legacy-260900 192.168.49.0/24 created
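
For reference, the network step above can be reproduced by hand. The create flags below are copied verbatim from the `docker network create` invocation in the log; the inspect template is a minimal sketch for checking the subnet that was actually assigned:

    docker network create --driver=bridge \
      --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true \
      --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-260900 \
      ingress-addon-legacy-260900
    # verify the assigned subnet
    docker network inspect ingress-addon-legacy-260900 \
      --format '{{(index .IPAM.Config 0).Subnet}}'
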
	I0925 10:43:12.638821   51318 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-260900" container
	I0925 10:43:12.638870   51318 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0925 10:43:12.653270   51318 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-260900 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-260900 --label created_by.minikube.sigs.k8s.io=true
	I0925 10:43:12.668870   51318 oci.go:103] Successfully created a docker volume ingress-addon-legacy-260900
	I0925 10:43:12.668934   51318 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-260900-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-260900 --entrypoint /usr/bin/test -v ingress-addon-legacy-260900:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -d /var/lib
	I0925 10:43:14.431354   51318 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-260900-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-260900 --entrypoint /usr/bin/test -v ingress-addon-legacy-260900:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -d /var/lib: (1.762367541s)
	I0925 10:43:14.431385   51318 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-260900
	I0925 10:43:14.431403   51318 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0925 10:43:14.431422   51318 kic.go:190] Starting extracting preloaded images to volume ...
	I0925 10:43:14.431468   51318 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17297-5744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-260900:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir
	I0925 10:43:19.642536   51318 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17297-5744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-260900:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir: (5.211010325s)
	I0925 10:43:19.642566   51318 kic.go:199] duration metric: took 5.211142 seconds to extract preloaded images to volume
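
The extraction above populates a named volume without starting the full node container: a throwaway kicbase container runs tar as its entrypoint against the mounted volume. A minimal sketch of the same step (tarball path, volume name, and image digest taken from the log):

    PRELOAD=/home/jenkins/minikube-integration/17297-5744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
    KICBASE=gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PRELOAD":/preloaded.tar:ro \
      -v ingress-addon-legacy-260900:/extractDir \
      "$KICBASE" -I lz4 -xf /preloaded.tar -C /extractDir
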
	W0925 10:43:19.642689   51318 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0925 10:43:19.642797   51318 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0925 10:43:19.693406   51318 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-260900 --name ingress-addon-legacy-260900 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-260900 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-260900 --network ingress-addon-legacy-260900 --ip 192.168.49.2 --volume ingress-addon-legacy-260900:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3
	I0925 10:43:19.981691   51318 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-260900 --format={{.State.Running}}
	I0925 10:43:19.999385   51318 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-260900 --format={{.State.Status}}
	I0925 10:43:20.016546   51318 cli_runner.go:164] Run: docker exec ingress-addon-legacy-260900 stat /var/lib/dpkg/alternatives/iptables
	I0925 10:43:20.053999   51318 oci.go:144] the created container "ingress-addon-legacy-260900" has a running status.
	I0925 10:43:20.054048   51318 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17297-5744/.minikube/machines/ingress-addon-legacy-260900/id_rsa...
	I0925 10:43:20.179013   51318 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/machines/ingress-addon-legacy-260900/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0925 10:43:20.179057   51318 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17297-5744/.minikube/machines/ingress-addon-legacy-260900/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0925 10:43:20.197306   51318 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-260900 --format={{.State.Status}}
	I0925 10:43:20.212284   51318 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0925 10:43:20.212304   51318 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-260900 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0925 10:43:20.273315   51318 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-260900 --format={{.State.Status}}
	I0925 10:43:20.288173   51318 machine.go:88] provisioning docker machine ...
	I0925 10:43:20.288208   51318 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-260900"
	I0925 10:43:20.288254   51318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-260900
	I0925 10:43:20.302915   51318 main.go:141] libmachine: Using SSH client type: native
	I0925 10:43:20.303371   51318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0925 10:43:20.303403   51318 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-260900 && echo "ingress-addon-legacy-260900" | sudo tee /etc/hostname
	I0925 10:43:20.304132   51318 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35770->127.0.0.1:32787: read: connection reset by peer
	I0925 10:43:23.446459   51318 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-260900
	
	I0925 10:43:23.446543   51318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-260900
	I0925 10:43:23.462624   51318 main.go:141] libmachine: Using SSH client type: native
	I0925 10:43:23.462958   51318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0925 10:43:23.462987   51318 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-260900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-260900/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-260900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0925 10:43:23.588418   51318 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0925 10:43:23.588455   51318 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17297-5744/.minikube CaCertPath:/home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17297-5744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17297-5744/.minikube}
	I0925 10:43:23.588475   51318 ubuntu.go:177] setting up certificates
	I0925 10:43:23.588486   51318 provision.go:83] configureAuth start
	I0925 10:43:23.588531   51318 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-260900
	I0925 10:43:23.603619   51318 provision.go:138] copyHostCerts
	I0925 10:43:23.603651   51318 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17297-5744/.minikube/ca.pem
	I0925 10:43:23.603687   51318 exec_runner.go:144] found /home/jenkins/minikube-integration/17297-5744/.minikube/ca.pem, removing ...
	I0925 10:43:23.603700   51318 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17297-5744/.minikube/ca.pem
	I0925 10:43:23.603770   51318 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17297-5744/.minikube/ca.pem (1078 bytes)
	I0925 10:43:23.603856   51318 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17297-5744/.minikube/cert.pem
	I0925 10:43:23.603881   51318 exec_runner.go:144] found /home/jenkins/minikube-integration/17297-5744/.minikube/cert.pem, removing ...
	I0925 10:43:23.603890   51318 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17297-5744/.minikube/cert.pem
	I0925 10:43:23.603922   51318 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17297-5744/.minikube/cert.pem (1123 bytes)
	I0925 10:43:23.603981   51318 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17297-5744/.minikube/key.pem
	I0925 10:43:23.604002   51318 exec_runner.go:144] found /home/jenkins/minikube-integration/17297-5744/.minikube/key.pem, removing ...
	I0925 10:43:23.604010   51318 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17297-5744/.minikube/key.pem
	I0925 10:43:23.604041   51318 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17297-5744/.minikube/key.pem (1675 bytes)
	I0925 10:43:23.604100   51318 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17297-5744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-260900 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-260900]
	I0925 10:43:23.648053   51318 provision.go:172] copyRemoteCerts
	I0925 10:43:23.648116   51318 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0925 10:43:23.648153   51318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-260900
	I0925 10:43:23.663973   51318 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/ingress-addon-legacy-260900/id_rsa Username:docker}
	I0925 10:43:23.752568   51318 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0925 10:43:23.752648   51318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0925 10:43:23.772340   51318 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0925 10:43:23.772393   51318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0925 10:43:23.792300   51318 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0925 10:43:23.792371   51318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0925 10:43:23.812426   51318 provision.go:86] duration metric: configureAuth took 223.928958ms
	I0925 10:43:23.812452   51318 ubuntu.go:193] setting minikube options for container-runtime
	I0925 10:43:23.812658   51318 config.go:182] Loaded profile config "ingress-addon-legacy-260900": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0925 10:43:23.812768   51318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-260900
	I0925 10:43:23.829033   51318 main.go:141] libmachine: Using SSH client type: native
	I0925 10:43:23.829353   51318 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0925 10:43:23.829370   51318 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0925 10:43:24.058997   51318 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0925 10:43:24.059022   51318 machine.go:91] provisioned docker machine in 3.770830212s
	I0925 10:43:24.059034   51318 client.go:171] LocalClient.Create took 11.517288821s
	I0925 10:43:24.059056   51318 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-260900" took 11.517346805s
	I0925 10:43:24.059068   51318 start.go:300] post-start starting for "ingress-addon-legacy-260900" (driver="docker")
	I0925 10:43:24.059077   51318 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0925 10:43:24.059139   51318 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0925 10:43:24.059183   51318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-260900
	I0925 10:43:24.075122   51318 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/ingress-addon-legacy-260900/id_rsa Username:docker}
	I0925 10:43:24.164684   51318 ssh_runner.go:195] Run: cat /etc/os-release
	I0925 10:43:24.167383   51318 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0925 10:43:24.167411   51318 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0925 10:43:24.167420   51318 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0925 10:43:24.167428   51318 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0925 10:43:24.167438   51318 filesync.go:126] Scanning /home/jenkins/minikube-integration/17297-5744/.minikube/addons for local assets ...
	I0925 10:43:24.167494   51318 filesync.go:126] Scanning /home/jenkins/minikube-integration/17297-5744/.minikube/files for local assets ...
	I0925 10:43:24.167580   51318 filesync.go:149] local asset: /home/jenkins/minikube-integration/17297-5744/.minikube/files/etc/ssl/certs/125162.pem -> 125162.pem in /etc/ssl/certs
	I0925 10:43:24.167591   51318 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/files/etc/ssl/certs/125162.pem -> /etc/ssl/certs/125162.pem
	I0925 10:43:24.167691   51318 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0925 10:43:24.175019   51318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/files/etc/ssl/certs/125162.pem --> /etc/ssl/certs/125162.pem (1708 bytes)
	I0925 10:43:24.195258   51318 start.go:303] post-start completed in 136.175587ms
	I0925 10:43:24.195579   51318 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-260900
	I0925 10:43:24.210976   51318 profile.go:148] Saving config to /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/config.json ...
	I0925 10:43:24.211221   51318 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0925 10:43:24.211269   51318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-260900
	I0925 10:43:24.226311   51318 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/ingress-addon-legacy-260900/id_rsa Username:docker}
	I0925 10:43:24.312966   51318 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0925 10:43:24.316827   51318 start.go:128] duration metric: createHost completed in 11.777666222s
	I0925 10:43:24.316854   51318 start.go:83] releasing machines lock for "ingress-addon-legacy-260900", held for 11.777777827s
	I0925 10:43:24.316916   51318 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-260900
	I0925 10:43:24.332618   51318 ssh_runner.go:195] Run: cat /version.json
	I0925 10:43:24.332693   51318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-260900
	I0925 10:43:24.332702   51318 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0925 10:43:24.332767   51318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-260900
	I0925 10:43:24.350407   51318 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/ingress-addon-legacy-260900/id_rsa Username:docker}
	I0925 10:43:24.350791   51318 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/ingress-addon-legacy-260900/id_rsa Username:docker}
	I0925 10:43:24.532924   51318 ssh_runner.go:195] Run: systemctl --version
	I0925 10:43:24.536933   51318 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0925 10:43:24.671420   51318 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0925 10:43:24.675398   51318 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0925 10:43:24.691589   51318 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0925 10:43:24.691678   51318 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0925 10:43:24.716043   51318 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0925 10:43:24.716069   51318 start.go:469] detecting cgroup driver to use...
	I0925 10:43:24.716100   51318 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0925 10:43:24.716144   51318 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0925 10:43:24.728696   51318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0925 10:43:24.738345   51318 docker.go:197] disabling cri-docker service (if available) ...
	I0925 10:43:24.738404   51318 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0925 10:43:24.749710   51318 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0925 10:43:24.761591   51318 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0925 10:43:24.840859   51318 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0925 10:43:24.919869   51318 docker.go:213] disabling docker service ...
	I0925 10:43:24.919932   51318 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0925 10:43:24.936467   51318 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0925 10:43:24.945800   51318 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0925 10:43:25.016479   51318 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0925 10:43:25.097987   51318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0925 10:43:25.108081   51318 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0925 10:43:25.121575   51318 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0925 10:43:25.121636   51318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0925 10:43:25.129908   51318 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0925 10:43:25.129964   51318 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0925 10:43:25.138369   51318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0925 10:43:25.146579   51318 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0925 10:43:25.154774   51318 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0925 10:43:25.162337   51318 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0925 10:43:25.169597   51318 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0925 10:43:25.176421   51318 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 10:43:25.247622   51318 ssh_runner.go:195] Run: sudo systemctl restart crio
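
Two runtime-config writes happen above before the restart: the printf (the `%!s(MISSING)` is a minikube logging artifact from escaping its own format string) writes a one-line crictl config, and the sed edits retarget cri-o's pause image and cgroup handling. Expected on-node state, with values taken from the commands in the log:

    $ cat /etc/crictl.yaml
    runtime-endpoint: unix:///var/run/crio/crio.sock
    $ sudo grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
    pause_image = "registry.k8s.io/pause:3.2"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
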
	I0925 10:43:25.336874   51318 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0925 10:43:25.336939   51318 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0925 10:43:25.340052   51318 start.go:537] Will wait 60s for crictl version
	I0925 10:43:25.340099   51318 ssh_runner.go:195] Run: which crictl
	I0925 10:43:25.342883   51318 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0925 10:43:25.374127   51318 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0925 10:43:25.374196   51318 ssh_runner.go:195] Run: crio --version
	I0925 10:43:25.405863   51318 ssh_runner.go:195] Run: crio --version
	I0925 10:43:25.440506   51318 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I0925 10:43:25.442043   51318 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-260900 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0925 10:43:25.457190   51318 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0925 10:43:25.460543   51318 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
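
The one-liner above is an idempotent /etc/hosts update: it filters out any stale host.minikube.internal entry, appends the current gateway mapping to a temp file, and copies that file back under sudo (a plain shell redirect into /etc/hosts would run unprivileged in the caller's shell and fail). The resulting entry, per the log's values:

    192.168.49.1	host.minikube.internal
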
	I0925 10:43:25.469948   51318 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0925 10:43:25.469997   51318 ssh_runner.go:195] Run: sudo crictl images --output json
	I0925 10:43:25.511354   51318 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0925 10:43:25.511407   51318 ssh_runner.go:195] Run: which lz4
	I0925 10:43:25.514437   51318 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0925 10:43:25.514506   51318 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0925 10:43:25.517285   51318 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0925 10:43:25.517308   51318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I0925 10:43:26.408396   51318 crio.go:444] Took 0.893888 seconds to copy over tarball
	I0925 10:43:26.408469   51318 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0925 10:43:28.721939   51318 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.313439382s)
	I0925 10:43:28.721966   51318 crio.go:451] Took 2.313542 seconds to extract the tarball
	I0925 10:43:28.721978   51318 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0925 10:43:28.788919   51318 ssh_runner.go:195] Run: sudo crictl images --output json
	I0925 10:43:28.819592   51318 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0925 10:43:28.819614   51318 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0925 10:43:28.819699   51318 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0925 10:43:28.819721   51318 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0925 10:43:28.819731   51318 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0925 10:43:28.819743   51318 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0925 10:43:28.819767   51318 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0925 10:43:28.819789   51318 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0925 10:43:28.819719   51318 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0925 10:43:28.819701   51318 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0925 10:43:28.820924   51318 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0925 10:43:28.820934   51318 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0925 10:43:28.820940   51318 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0925 10:43:28.820953   51318 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0925 10:43:28.820960   51318 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0925 10:43:28.820925   51318 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0925 10:43:28.820965   51318 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0925 10:43:28.821263   51318 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0925 10:43:29.008078   51318 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0925 10:43:29.027611   51318 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0925 10:43:29.028917   51318 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0925 10:43:29.044664   51318 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0925 10:43:29.044706   51318 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0925 10:43:29.044738   51318 ssh_runner.go:195] Run: which crictl
	I0925 10:43:29.061163   51318 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0925 10:43:29.062943   51318 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0925 10:43:29.062984   51318 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0925 10:43:29.063025   51318 ssh_runner.go:195] Run: which crictl
	I0925 10:43:29.084471   51318 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0925 10:43:29.098578   51318 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0925 10:43:29.105665   51318 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0925 10:43:29.111903   51318 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0925 10:43:29.167230   51318 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0925 10:43:29.167270   51318 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0925 10:43:29.167308   51318 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0925 10:43:29.167321   51318 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0925 10:43:29.167350   51318 ssh_runner.go:195] Run: which crictl
	I0925 10:43:29.167360   51318 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0925 10:43:29.167388   51318 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0925 10:43:29.167412   51318 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0925 10:43:29.167433   51318 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0925 10:43:29.167445   51318 ssh_runner.go:195] Run: which crictl
	I0925 10:43:29.167462   51318 ssh_runner.go:195] Run: which crictl
	I0925 10:43:29.173262   51318 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0925 10:43:29.173309   51318 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0925 10:43:29.173327   51318 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0925 10:43:29.173351   51318 ssh_runner.go:195] Run: which crictl
	I0925 10:43:29.173362   51318 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0925 10:43:29.173403   51318 ssh_runner.go:195] Run: which crictl
	I0925 10:43:29.248112   51318 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0925 10:43:29.248166   51318 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0925 10:43:29.250154   51318 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0925 10:43:29.250173   51318 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0925 10:43:29.250217   51318 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0925 10:43:29.250294   51318 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0925 10:43:29.250332   51318 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0925 10:43:29.351105   51318 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0925 10:43:29.354169   51318 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0925 10:43:29.354237   51318 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0925 10:43:29.357038   51318 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0925 10:43:29.357054   51318 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0925 10:43:29.357095   51318 cache_images.go:92] LoadImages completed in 537.466597ms
	W0925 10:43:29.357163   51318 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
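
This warning is benign for the run: the per-image cache under .minikube/cache/images was never populated, so LoadImages bails out and the images are instead pulled during kubeadm preflight (see "[preflight] Pulling images" below). On a dev machine the cache could be warmed ahead of time; a sketch using minikube's classic cache command:

    minikube cache add registry.k8s.io/etcd:3.4.3-0
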
	I0925 10:43:29.357228   51318 ssh_runner.go:195] Run: crio config
	I0925 10:43:29.395887   51318 cni.go:84] Creating CNI manager for ""
	I0925 10:43:29.395916   51318 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0925 10:43:29.395942   51318 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0925 10:43:29.395959   51318 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-260900 NodeName:ingress-addon-legacy-260900 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0925 10:43:29.396133   51318 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-260900"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
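The rendered kubeadm.yaml above bundles four API documents separated by `---` (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). Once the file is scp'd to the node, a quick way to list what landed there (a sketch; path from the log):

    sudo grep -E '^(apiVersion|kind):' /var/tmp/minikube/kubeadm.yaml
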
	I0925 10:43:29.396233   51318 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-260900 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-260900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0925 10:43:29.396294   51318 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0925 10:43:29.403925   51318 binaries.go:44] Found k8s binaries, skipping transfer
	I0925 10:43:29.403979   51318 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0925 10:43:29.411136   51318 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I0925 10:43:29.425552   51318 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0925 10:43:29.441023   51318 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
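
The three in-memory scp's above install the kubelet drop-in, the kubelet unit file, and the kubeadm manifest. On the node, systemd's merged view of the unit can be checked with (a sketch; paths from the log):

    sudo systemctl cat kubelet
    # -> /lib/systemd/system/kubelet.service
    #    /etc/systemd/system/kubelet.service.d/10-kubeadm.conf  (the drop-in above)
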
	I0925 10:43:29.455927   51318 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0925 10:43:29.458791   51318 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0925 10:43:29.467771   51318 certs.go:56] Setting up /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900 for IP: 192.168.49.2
	I0925 10:43:29.467803   51318 certs.go:190] acquiring lock for shared ca certs: {Name:mk1dc4321044392bda6d0b04ee5f4e5cca314d3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 10:43:29.467945   51318 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17297-5744/.minikube/ca.key
	I0925 10:43:29.468012   51318 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17297-5744/.minikube/proxy-client-ca.key
	I0925 10:43:29.468069   51318 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/client.key
	I0925 10:43:29.468089   51318 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/client.crt with IP's: []
	I0925 10:43:29.702565   51318 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/client.crt ...
	I0925 10:43:29.702595   51318 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/client.crt: {Name:mkcada3c1f8dc2552c3a0b2dfc896a749702980e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 10:43:29.702753   51318 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/client.key ...
	I0925 10:43:29.702764   51318 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/client.key: {Name:mka6598c23ac174fee73749fe823406a0f59801c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 10:43:29.702834   51318 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/apiserver.key.dd3b5fb2
	I0925 10:43:29.702848   51318 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0925 10:43:29.793073   51318 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/apiserver.crt.dd3b5fb2 ...
	I0925 10:43:29.793100   51318 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/apiserver.crt.dd3b5fb2: {Name:mke6dfc69c190865ffb08e371e0db30063a9668f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 10:43:29.793243   51318 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/apiserver.key.dd3b5fb2 ...
	I0925 10:43:29.793279   51318 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/apiserver.key.dd3b5fb2: {Name:mk7da6cac95fff2adcdca89464798e5f7713d0c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 10:43:29.793351   51318 certs.go:337] copying /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/apiserver.crt
	I0925 10:43:29.793431   51318 certs.go:341] copying /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/apiserver.key
	I0925 10:43:29.793490   51318 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/proxy-client.key
	I0925 10:43:29.793504   51318 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/proxy-client.crt with IP's: []
	I0925 10:43:29.852910   51318 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/proxy-client.crt ...
	I0925 10:43:29.852936   51318 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/proxy-client.crt: {Name:mk3d51f65a30023e6d49bd278759e4cb9222cce6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 10:43:29.853076   51318 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/proxy-client.key ...
	I0925 10:43:29.853086   51318 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/proxy-client.key: {Name:mk723e625df9ef8f4c6b3ea2617f7c2d931f38f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 10:43:29.853145   51318 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0925 10:43:29.853161   51318 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0925 10:43:29.853171   51318 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0925 10:43:29.853182   51318 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0925 10:43:29.853192   51318 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0925 10:43:29.853205   51318 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0925 10:43:29.853217   51318 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0925 10:43:29.853229   51318 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0925 10:43:29.853274   51318 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/home/jenkins/minikube-integration/17297-5744/.minikube/certs/12516.pem (1338 bytes)
	W0925 10:43:29.853304   51318 certs.go:433] ignoring /home/jenkins/minikube-integration/17297-5744/.minikube/certs/home/jenkins/minikube-integration/17297-5744/.minikube/certs/12516_empty.pem, impossibly tiny 0 bytes
	I0925 10:43:29.853314   51318 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca-key.pem (1675 bytes)
	I0925 10:43:29.853343   51318 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca.pem (1078 bytes)
	I0925 10:43:29.853365   51318 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/home/jenkins/minikube-integration/17297-5744/.minikube/certs/cert.pem (1123 bytes)
	I0925 10:43:29.853391   51318 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/home/jenkins/minikube-integration/17297-5744/.minikube/certs/key.pem (1675 bytes)
	I0925 10:43:29.853436   51318 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-5744/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17297-5744/.minikube/files/etc/ssl/certs/125162.pem (1708 bytes)
	I0925 10:43:29.853466   51318 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/12516.pem -> /usr/share/ca-certificates/12516.pem
	I0925 10:43:29.853479   51318 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/files/etc/ssl/certs/125162.pem -> /usr/share/ca-certificates/125162.pem
	I0925 10:43:29.853491   51318 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0925 10:43:29.854037   51318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0925 10:43:29.875571   51318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0925 10:43:29.895759   51318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0925 10:43:29.915216   51318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0925 10:43:29.935044   51318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0925 10:43:29.954719   51318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0925 10:43:29.975192   51318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0925 10:43:29.995246   51318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0925 10:43:30.015445   51318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/certs/12516.pem --> /usr/share/ca-certificates/12516.pem (1338 bytes)
	I0925 10:43:30.035765   51318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/files/etc/ssl/certs/125162.pem --> /usr/share/ca-certificates/125162.pem (1708 bytes)
	I0925 10:43:30.056047   51318 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0925 10:43:30.075996   51318 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0925 10:43:30.090503   51318 ssh_runner.go:195] Run: openssl version
	I0925 10:43:30.095151   51318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12516.pem && ln -fs /usr/share/ca-certificates/12516.pem /etc/ssl/certs/12516.pem"
	I0925 10:43:30.102938   51318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12516.pem
	I0925 10:43:30.105825   51318 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 25 10:39 /usr/share/ca-certificates/12516.pem
	I0925 10:43:30.105868   51318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12516.pem
	I0925 10:43:30.111696   51318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12516.pem /etc/ssl/certs/51391683.0"
	I0925 10:43:30.119307   51318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125162.pem && ln -fs /usr/share/ca-certificates/125162.pem /etc/ssl/certs/125162.pem"
	I0925 10:43:30.127139   51318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125162.pem
	I0925 10:43:30.130065   51318 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 25 10:39 /usr/share/ca-certificates/125162.pem
	I0925 10:43:30.130103   51318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125162.pem
	I0925 10:43:30.135802   51318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125162.pem /etc/ssl/certs/3ec20f2e.0"
	I0925 10:43:30.143265   51318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0925 10:43:30.150728   51318 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0925 10:43:30.153630   51318 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 25 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I0925 10:43:30.153671   51318 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0925 10:43:30.159369   51318 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
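
The `<hash>.0` link names created above follow OpenSSL's hashed-directory convention: each link is named after the subject-name hash of the certificate, which is exactly what `openssl x509 -hash` prints. Reproducing the minikubeCA link from the log:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # b5213941
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/b5213941.0
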
	I0925 10:43:30.166805   51318 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0925 10:43:30.169634   51318 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0925 10:43:30.169685   51318 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-260900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-260900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 10:43:30.169746   51318 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0925 10:43:30.169775   51318 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0925 10:43:30.199923   51318 cri.go:89] found id: ""
	I0925 10:43:30.199984   51318 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0925 10:43:30.207469   51318 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0925 10:43:30.214612   51318 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0925 10:43:30.214655   51318 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0925 10:43:30.221782   51318 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0925 10:43:30.221823   51318 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0925 10:43:30.263034   51318 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0925 10:43:30.263119   51318 kubeadm.go:322] [preflight] Running pre-flight checks
	I0925 10:43:30.298847   51318 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0925 10:43:30.298917   51318 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1042-gcp
	I0925 10:43:30.298978   51318 kubeadm.go:322] OS: Linux
	I0925 10:43:30.299058   51318 kubeadm.go:322] CGROUPS_CPU: enabled
	I0925 10:43:30.299127   51318 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0925 10:43:30.299211   51318 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0925 10:43:30.299285   51318 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0925 10:43:30.299357   51318 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0925 10:43:30.299427   51318 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0925 10:43:30.362777   51318 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0925 10:43:30.362941   51318 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0925 10:43:30.363080   51318 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0925 10:43:30.533625   51318 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0925 10:43:30.534528   51318 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0925 10:43:30.534591   51318 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0925 10:43:30.604575   51318 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0925 10:43:30.608431   51318 out.go:204]   - Generating certificates and keys ...
	I0925 10:43:30.608570   51318 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0925 10:43:30.608701   51318 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0925 10:43:30.796362   51318 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0925 10:43:30.900139   51318 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0925 10:43:31.153298   51318 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0925 10:43:31.277495   51318 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0925 10:43:31.326079   51318 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0925 10:43:31.326258   51318 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-260900 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0925 10:43:31.487485   51318 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0925 10:43:31.487693   51318 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-260900 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0925 10:43:31.688065   51318 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0925 10:43:31.799173   51318 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0925 10:43:31.876370   51318 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0925 10:43:31.876480   51318 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0925 10:43:32.009894   51318 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0925 10:43:32.176186   51318 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0925 10:43:32.343368   51318 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0925 10:43:32.565517   51318 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0925 10:43:32.566126   51318 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0925 10:43:32.568033   51318 out.go:204]   - Booting up control plane ...
	I0925 10:43:32.568148   51318 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0925 10:43:32.571359   51318 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0925 10:43:32.573117   51318 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0925 10:43:32.574117   51318 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0925 10:43:32.576189   51318 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0925 10:43:39.078619   51318 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.502324 seconds
	I0925 10:43:39.078771   51318 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0925 10:43:39.088693   51318 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0925 10:43:39.606034   51318 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0925 10:43:39.606232   51318 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-260900 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0925 10:43:40.113719   51318 kubeadm.go:322] [bootstrap-token] Using token: uqy2ut.94cbly6znruuhb3v
	I0925 10:43:40.115241   51318 out.go:204]   - Configuring RBAC rules ...
	I0925 10:43:40.115343   51318 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0925 10:43:40.118475   51318 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0925 10:43:40.123936   51318 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0925 10:43:40.126900   51318 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0925 10:43:40.128562   51318 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0925 10:43:40.130296   51318 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0925 10:43:40.136433   51318 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0925 10:43:40.377021   51318 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0925 10:43:40.528279   51318 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0925 10:43:40.529365   51318 kubeadm.go:322] 
	I0925 10:43:40.529477   51318 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0925 10:43:40.529494   51318 kubeadm.go:322] 
	I0925 10:43:40.529589   51318 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0925 10:43:40.529603   51318 kubeadm.go:322] 
	I0925 10:43:40.529632   51318 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0925 10:43:40.529716   51318 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0925 10:43:40.529795   51318 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0925 10:43:40.529802   51318 kubeadm.go:322] 
	I0925 10:43:40.529875   51318 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0925 10:43:40.529993   51318 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0925 10:43:40.530060   51318 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0925 10:43:40.530067   51318 kubeadm.go:322] 
	I0925 10:43:40.530174   51318 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0925 10:43:40.530287   51318 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0925 10:43:40.530297   51318 kubeadm.go:322] 
	I0925 10:43:40.530397   51318 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token uqy2ut.94cbly6znruuhb3v \
	I0925 10:43:40.530501   51318 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:1c3306aef1d6006cd71809f67dab403b34b4e2df13ef053b70f810d540f79657 \
	I0925 10:43:40.530527   51318 kubeadm.go:322]     --control-plane 
	I0925 10:43:40.530544   51318 kubeadm.go:322] 
	I0925 10:43:40.530668   51318 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0925 10:43:40.530682   51318 kubeadm.go:322] 
	I0925 10:43:40.530794   51318 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token uqy2ut.94cbly6znruuhb3v \
	I0925 10:43:40.530939   51318 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:1c3306aef1d6006cd71809f67dab403b34b4e2df13ef053b70f810d540f79657 
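
The `--discovery-token-ca-cert-hash` value in both join commands is a pin on the cluster CA: kubeadm prints the SHA-256 of the CA certificate's DER-encoded SubjectPublicKeyInfo, and a joining node validates the CA it discovers against this hash before trusting the API server. A sketch of computing the same hash from a PEM file; the ca.crt path is an assumption based on the certificateDir shown earlier in the log:

```go
// A sketch of the discovery hash: SHA-256 over the CA certificate's
// DER-encoded SubjectPublicKeyInfo, which is what kubeadm prints above.
// The ca.crt path is an assumption based on the certificateDir in the log.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Note: the hash covers only the public key info, not the whole cert.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}
```
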
	I0925 10:43:40.532059   51318 kubeadm.go:322] W0925 10:43:30.262561    1388 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0925 10:43:40.532313   51318 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1042-gcp\n", err: exit status 1
	I0925 10:43:40.532406   51318 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0925 10:43:40.532547   51318 kubeadm.go:322] W0925 10:43:32.571068    1388 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0925 10:43:40.532687   51318 kubeadm.go:322] W0925 10:43:32.572469    1388 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0925 10:43:40.532709   51318 cni.go:84] Creating CNI manager for ""
	I0925 10:43:40.532719   51318 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0925 10:43:40.535310   51318 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0925 10:43:40.536498   51318 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0925 10:43:40.539887   51318 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I0925 10:43:40.539900   51318 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0925 10:43:40.554959   51318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0925 10:43:40.945660   51318 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0925 10:43:40.945752   51318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:43:40.945752   51318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=1bf6c3d5317028f348e55ea19d261973a6487d3c minikube.k8s.io/name=ingress-addon-legacy-260900 minikube.k8s.io/updated_at=2023_09_25T10_43_40_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:43:40.952423   51318 ops.go:34] apiserver oom_adj: -16
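
The `oom_adj: -16` read back above confirms the kube-apiserver runs with a lowered OOM-kill priority, so the kernel prefers to kill other processes under memory pressure. A sketch of the same check; it reads the legacy /proc/<pid>/oom_adj file shown in the log (modern kernels prefer oom_score_adj), and the pgrep match is an approximation of the logged pattern:

```go
// A sketch of the oom_adj check, assuming a running kube-apiserver process.
// It mirrors `cat /proc/$(pgrep kube-apiserver)/oom_adj` from the log;
// oom_adj is the legacy interface, modern kernels prefer oom_score_adj.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pidOut, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
	if err != nil {
		log.Fatal("kube-apiserver not found: ", err)
	}
	pid := strings.TrimSpace(string(pidOut))

	raw, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		log.Fatal(err)
	}
	// A negative value (the log shows -16) lowers OOM-kill priority.
	fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(raw)))
}
```
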
	I0925 10:43:41.058429   51318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:43:41.121881   51318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:43:41.686791   51318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:43:42.186652   51318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:43:42.686167   51318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:43:43.186437   51318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:43:43.686701   51318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:43:44.186630   51318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:43:44.686449   51318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:43:45.186914   51318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:43:45.686960   51318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:43:46.186454   51318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:43:46.687019   51318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:43:47.186548   51318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:43:47.686461   51318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:43:48.186138   51318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:43:48.686975   51318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:43:49.186947   51318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:43:49.686286   51318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:43:50.186693   51318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:43:50.686843   51318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:43:51.186189   51318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:43:51.686726   51318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:43:52.187193   51318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:43:52.686926   51318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:43:53.186324   51318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:43:53.686133   51318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:43:54.186847   51318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:43:54.687106   51318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:43:55.186364   51318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:43:55.686929   51318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:43:56.186164   51318 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:43:56.248558   51318 kubeadm.go:1081] duration metric: took 15.302869752s to wait for elevateKubeSystemPrivileges.
	I0925 10:43:56.248598   51318 kubeadm.go:406] StartCluster complete in 26.078915686s
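
The run of identical `kubectl get sa default` invocations between 10:43:41 and 10:43:56 is a fixed-interval poll: the default service account only appears once kube-controller-manager's token controller is up, and the `minikube-rbac` cluster role binding created above is not usable until then. A generic sketch of that poll follows; the ~500ms interval is read off the log timestamps, while the 2-minute timeout and the bare `kubectl` invocation (minikube really calls its pinned binary with --kubeconfig) are assumptions:

```go
// A generic sketch of that poll loop. The ~500ms interval is read off the
// log timestamps; the 2-minute timeout and the bare `kubectl` invocation
// (minikube really calls its pinned binary with --kubeconfig) are assumed.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func pollUntilOK(ctx context.Context, interval time.Duration, name string, args ...string) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		// Exit code 0 means the object exists and we can stop polling.
		if err := exec.CommandContext(ctx, name, args...).Run(); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("gave up waiting for %s: %w", name, ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	if err := pollUntilOK(ctx, 500*time.Millisecond, "kubectl", "get", "sa", "default"); err != nil {
		fmt.Println(err)
	}
}
```
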
	I0925 10:43:56.248672   51318 settings.go:142] acquiring lock: {Name:mk1ac20708e0ba811b0d8618989be560267b849d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 10:43:56.248765   51318 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17297-5744/kubeconfig
	I0925 10:43:56.249604   51318 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17297-5744/kubeconfig: {Name:mkcd9251a91cb443db17b5c9d69f4674dad74ecd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 10:43:56.249854   51318 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0925 10:43:56.249957   51318 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0925 10:43:56.250045   51318 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-260900"
	I0925 10:43:56.250068   51318 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-260900"
	I0925 10:43:56.250069   51318 config.go:182] Loaded profile config "ingress-addon-legacy-260900": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0925 10:43:56.250085   51318 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-260900"
	I0925 10:43:56.250108   51318 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-260900"
	I0925 10:43:56.250114   51318 host.go:66] Checking if "ingress-addon-legacy-260900" exists ...
	I0925 10:43:56.250478   51318 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-260900 --format={{.State.Status}}
	I0925 10:43:56.250427   51318 kapi.go:59] client config for ingress-addon-legacy-260900: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/client.crt", KeyFile:"/home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/client.key", CAFile:"/home/jenkins/minikube-integration/17297-5744/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf6d20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0925 10:43:56.250576   51318 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-260900 --format={{.State.Status}}
	I0925 10:43:56.251165   51318 cert_rotation.go:137] Starting client certificate rotation controller
	I0925 10:43:56.272273   51318 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0925 10:43:56.275624   51318 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0925 10:43:56.275645   51318 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0925 10:43:56.275706   51318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-260900
	I0925 10:43:56.272345   51318 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-260900" context rescaled to 1 replicas
	I0925 10:43:56.276006   51318 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0925 10:43:56.279759   51318 out.go:177] * Verifying Kubernetes components...
	I0925 10:43:56.281293   51318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 10:43:56.282818   51318 kapi.go:59] client config for ingress-addon-legacy-260900: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/client.crt", KeyFile:"/home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/client.key", CAFile:"/home/jenkins/minikube-integration/17297-5744/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf6d20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0925 10:43:56.286690   51318 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-260900"
	I0925 10:43:56.286734   51318 host.go:66] Checking if "ingress-addon-legacy-260900" exists ...
	I0925 10:43:56.287226   51318 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-260900 --format={{.State.Status}}
	I0925 10:43:56.303818   51318 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/ingress-addon-legacy-260900/id_rsa Username:docker}
	I0925 10:43:56.310642   51318 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0925 10:43:56.310663   51318 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0925 10:43:56.310705   51318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-260900
	I0925 10:43:56.326430   51318 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/ingress-addon-legacy-260900/id_rsa Username:docker}
	I0925 10:43:56.469529   51318 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0925 10:43:56.470106   51318 kapi.go:59] client config for ingress-addon-legacy-260900: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/client.crt", KeyFile:"/home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/client.key", CAFile:"/home/jenkins/minikube-integration/17297-5744/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf6d20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0925 10:43:56.470434   51318 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-260900" to be "Ready" ...
	I0925 10:43:56.562613   51318 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0925 10:43:56.564402   51318 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0925 10:43:56.775480   51318 start.go:923] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
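
The sed pipeline at 10:43:56.469 edits the CoreDNS Corefile inside the coredns ConfigMap, inserting a `hosts` plugin block that resolves host.minikube.internal to the gateway (192.168.49.1) ahead of the existing forward-to-resolv.conf rule, then replaces the ConfigMap. A string-level sketch of the same insertion, assuming a simplified Corefile:

```go
// A string-level sketch of the Corefile edit, assuming a simplified
// Corefile; the sed in the log does the same insertion (plus a `log`
// directive) before replacing the coredns ConfigMap.
package main

import (
	"fmt"
	"strings"
)

func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf(
		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var b strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		// Mirror the sed address: insert just before the forward plugin,
		// so host.minikube.internal is answered before upstream DNS is tried.
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			b.WriteString(hostsBlock)
		}
		b.WriteString(line)
	}
	return b.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.49.1")) // gateway IP from the log
}
```
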
	I0925 10:43:56.908740   51318 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0925 10:43:56.910097   51318 addons.go:502] enable addons completed in 660.137456ms: enabled=[storage-provisioner default-storageclass]
	I0925 10:43:58.478854   51318 node_ready.go:58] node "ingress-addon-legacy-260900" has status "Ready":"False"
	I0925 10:44:00.978177   51318 node_ready.go:49] node "ingress-addon-legacy-260900" has status "Ready":"True"
	I0925 10:44:00.978200   51318 node_ready.go:38] duration metric: took 4.507735126s waiting for node "ingress-addon-legacy-260900" to be "Ready" ...
	I0925 10:44:00.978211   51318 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 10:44:00.984024   51318 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-sw55h" in "kube-system" namespace to be "Ready" ...
	I0925 10:44:02.991549   51318 pod_ready.go:102] pod "coredns-66bff467f8-sw55h" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-09-25 10:43:55 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0925 10:44:05.491629   51318 pod_ready.go:102] pod "coredns-66bff467f8-sw55h" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-09-25 10:43:55 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0925 10:44:07.493082   51318 pod_ready.go:92] pod "coredns-66bff467f8-sw55h" in "kube-system" namespace has status "Ready":"True"
	I0925 10:44:07.493107   51318 pod_ready.go:81] duration metric: took 6.509057755s waiting for pod "coredns-66bff467f8-sw55h" in "kube-system" namespace to be "Ready" ...
	I0925 10:44:07.493119   51318 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-260900" in "kube-system" namespace to be "Ready" ...
	I0925 10:44:07.496950   51318 pod_ready.go:92] pod "etcd-ingress-addon-legacy-260900" in "kube-system" namespace has status "Ready":"True"
	I0925 10:44:07.496970   51318 pod_ready.go:81] duration metric: took 3.843166ms waiting for pod "etcd-ingress-addon-legacy-260900" in "kube-system" namespace to be "Ready" ...
	I0925 10:44:07.496984   51318 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-260900" in "kube-system" namespace to be "Ready" ...
	I0925 10:44:07.500860   51318 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-260900" in "kube-system" namespace has status "Ready":"True"
	I0925 10:44:07.500879   51318 pod_ready.go:81] duration metric: took 3.888466ms waiting for pod "kube-apiserver-ingress-addon-legacy-260900" in "kube-system" namespace to be "Ready" ...
	I0925 10:44:07.500886   51318 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-260900" in "kube-system" namespace to be "Ready" ...
	I0925 10:44:07.504260   51318 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-260900" in "kube-system" namespace has status "Ready":"True"
	I0925 10:44:07.504275   51318 pod_ready.go:81] duration metric: took 3.383781ms waiting for pod "kube-controller-manager-ingress-addon-legacy-260900" in "kube-system" namespace to be "Ready" ...
	I0925 10:44:07.504283   51318 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j9xwk" in "kube-system" namespace to be "Ready" ...
	I0925 10:44:07.507715   51318 pod_ready.go:92] pod "kube-proxy-j9xwk" in "kube-system" namespace has status "Ready":"True"
	I0925 10:44:07.507734   51318 pod_ready.go:81] duration metric: took 3.445221ms waiting for pod "kube-proxy-j9xwk" in "kube-system" namespace to be "Ready" ...
	I0925 10:44:07.507743   51318 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-260900" in "kube-system" namespace to be "Ready" ...
	I0925 10:44:07.689134   51318 request.go:629] Waited for 181.330591ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-260900
	I0925 10:44:07.888999   51318 request.go:629] Waited for 197.373041ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-260900
	I0925 10:44:07.891650   51318 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-260900" in "kube-system" namespace has status "Ready":"True"
	I0925 10:44:07.891672   51318 pod_ready.go:81] duration metric: took 383.920339ms waiting for pod "kube-scheduler-ingress-addon-legacy-260900" in "kube-system" namespace to be "Ready" ...
	I0925 10:44:07.891685   51318 pod_ready.go:38] duration metric: took 6.913463348s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
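
The request.go:629 "Waited ... due to client-side throttling" lines are client-go's token-bucket limiter at work: with QPS and Burst left at 0 in the rest.Config dumps above, client-go falls back to its defaults of 5 requests/second with a burst of 10, so the rapid status checks here queue for roughly 200ms each. A stand-in sketch of that behaviour using golang.org/x/time/rate with the same parameters:

```go
// A stand-in sketch of client-go's default limiter using
// golang.org/x/time/rate with the same parameters (5 QPS, burst 10);
// this is an illustration, not client-go's own implementation.
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	limiter := rate.NewLimiter(rate.Limit(5), 10) // 5 req/s, burst of 10

	start := time.Now()
	for i := 1; i <= 15; i++ {
		if err := limiter.Wait(context.Background()); err != nil {
			fmt.Println(err)
			return
		}
		// The first 10 requests pass immediately (burst); each later one
		// waits ~200ms for a token, matching the ~180-200ms waits logged.
		fmt.Printf("request %2d at +%v\n", i, time.Since(start).Round(time.Millisecond))
	}
}
```
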
	I0925 10:44:07.891705   51318 api_server.go:52] waiting for apiserver process to appear ...
	I0925 10:44:07.891756   51318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 10:44:07.901824   51318 api_server.go:72] duration metric: took 11.62578093s to wait for apiserver process to appear ...
	I0925 10:44:07.901843   51318 api_server.go:88] waiting for apiserver healthz status ...
	I0925 10:44:07.901856   51318 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0925 10:44:07.906827   51318 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0925 10:44:07.907536   51318 api_server.go:141] control plane version: v1.18.20
	I0925 10:44:07.907554   51318 api_server.go:131] duration metric: took 5.70589ms to wait for apiserver health ...
	I0925 10:44:07.907561   51318 system_pods.go:43] waiting for kube-system pods to appear ...
	I0925 10:44:08.088936   51318 request.go:629] Waited for 181.321823ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0925 10:44:08.093750   51318 system_pods.go:59] 8 kube-system pods found
	I0925 10:44:08.093781   51318 system_pods.go:61] "coredns-66bff467f8-sw55h" [f31fb59a-4c71-44bf-8e6f-b45256ab48df] Running
	I0925 10:44:08.093788   51318 system_pods.go:61] "etcd-ingress-addon-legacy-260900" [4c6c7f9b-859f-4381-8295-08ac8e607894] Running
	I0925 10:44:08.093792   51318 system_pods.go:61] "kindnet-ss2wc" [05aae564-7f17-4c7a-8b5e-54fd6185eaa0] Running
	I0925 10:44:08.093796   51318 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-260900" [e5942be4-5234-4e9b-a1d2-d35002cfebde] Running
	I0925 10:44:08.093800   51318 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-260900" [bdcdac6b-3152-479b-84a5-0cb607071f3d] Running
	I0925 10:44:08.093803   51318 system_pods.go:61] "kube-proxy-j9xwk" [a110eb81-979f-4367-b976-9df1ccf5d1cf] Running
	I0925 10:44:08.093807   51318 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-260900" [d00cc123-2423-4507-9009-e49e4001135f] Running
	I0925 10:44:08.093811   51318 system_pods.go:61] "storage-provisioner" [9e902316-add6-4f1b-b067-7fb2ecbf9461] Running
	I0925 10:44:08.093818   51318 system_pods.go:74] duration metric: took 186.25183ms to wait for pod list to return data ...
	I0925 10:44:08.093825   51318 default_sa.go:34] waiting for default service account to be created ...
	I0925 10:44:08.288734   51318 request.go:629] Waited for 194.830898ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0925 10:44:08.290898   51318 default_sa.go:45] found service account: "default"
	I0925 10:44:08.290923   51318 default_sa.go:55] duration metric: took 197.091312ms for default service account to be created ...
	I0925 10:44:08.290934   51318 system_pods.go:116] waiting for k8s-apps to be running ...
	I0925 10:44:08.489358   51318 request.go:629] Waited for 198.350907ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0925 10:44:08.494382   51318 system_pods.go:86] 8 kube-system pods found
	I0925 10:44:08.494410   51318 system_pods.go:89] "coredns-66bff467f8-sw55h" [f31fb59a-4c71-44bf-8e6f-b45256ab48df] Running
	I0925 10:44:08.494424   51318 system_pods.go:89] "etcd-ingress-addon-legacy-260900" [4c6c7f9b-859f-4381-8295-08ac8e607894] Running
	I0925 10:44:08.494430   51318 system_pods.go:89] "kindnet-ss2wc" [05aae564-7f17-4c7a-8b5e-54fd6185eaa0] Running
	I0925 10:44:08.494436   51318 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-260900" [e5942be4-5234-4e9b-a1d2-d35002cfebde] Running
	I0925 10:44:08.494446   51318 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-260900" [bdcdac6b-3152-479b-84a5-0cb607071f3d] Running
	I0925 10:44:08.494454   51318 system_pods.go:89] "kube-proxy-j9xwk" [a110eb81-979f-4367-b976-9df1ccf5d1cf] Running
	I0925 10:44:08.494461   51318 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-260900" [d00cc123-2423-4507-9009-e49e4001135f] Running
	I0925 10:44:08.494471   51318 system_pods.go:89] "storage-provisioner" [9e902316-add6-4f1b-b067-7fb2ecbf9461] Running
	I0925 10:44:08.494482   51318 system_pods.go:126] duration metric: took 203.541044ms to wait for k8s-apps to be running ...
	I0925 10:44:08.494495   51318 system_svc.go:44] waiting for kubelet service to be running ....
	I0925 10:44:08.494548   51318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 10:44:08.504675   51318 system_svc.go:56] duration metric: took 10.173036ms WaitForService to wait for kubelet.
	I0925 10:44:08.504701   51318 kubeadm.go:581] duration metric: took 12.228662483s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0925 10:44:08.504720   51318 node_conditions.go:102] verifying NodePressure condition ...
	I0925 10:44:08.689120   51318 request.go:629] Waited for 184.337241ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0925 10:44:08.691911   51318 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0925 10:44:08.691937   51318 node_conditions.go:123] node cpu capacity is 8
	I0925 10:44:08.691948   51318 node_conditions.go:105] duration metric: took 187.222989ms to run NodePressure ...
	I0925 10:44:08.691958   51318 start.go:228] waiting for startup goroutines ...
	I0925 10:44:08.691964   51318 start.go:233] waiting for cluster config update ...
	I0925 10:44:08.691972   51318 start.go:242] writing updated cluster config ...
	I0925 10:44:08.692233   51318 ssh_runner.go:195] Run: rm -f paused
	I0925 10:44:08.735398   51318 start.go:600] kubectl: 1.28.2, cluster: 1.18.20 (minor skew: 10)
	I0925 10:44:08.737452   51318 out.go:177] 
	W0925 10:44:08.739093   51318 out.go:239] ! /usr/local/bin/kubectl is version 1.28.2, which may have incompatibilities with Kubernetes 1.18.20.
	I0925 10:44:08.740740   51318 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0925 10:44:08.742217   51318 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-260900" cluster and "default" namespace by default
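
The closing warning flags a minor-version skew of 10 between the host's kubectl (1.28.2) and the cluster (1.18.20); kubectl's support policy only guarantees compatibility within one minor version of the API server, hence the suggestion to use the pinned `minikube kubectl` instead. A trivial sketch of the skew computation, with the version strings taken from the log:

```go
// A trivial sketch of the skew computation, with version strings taken
// from the log line above.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func minor(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	m, _ := strconv.Atoi(parts[1]) // error ignored for the sketch
	return m
}

func main() {
	client, cluster := "1.28.2", "1.18.20"
	skew := minor(client) - minor(cluster)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("minor skew: %d (kubectl supports a skew of at most 1)\n", skew) // prints 10
}
```
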
	
	* 
	* ==> CRI-O <==
	* Sep 25 10:46:55 ingress-addon-legacy-260900 crio[960]: time="2023-09-25 10:46:55.072384259Z" level=info msg="Started container" PID=4937 containerID=2b4c4cd6e5fc05399588c92296a149afa3b7746fd11212a4e00af991a4d70de5 description=default/hello-world-app-5f5d8b66bb-28xrc/hello-world-app id=1bff4243-bca9-4bd7-8e79-b42c2150dbd6 name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=906e3df38573209768c5c0c640bc683fe78a92603944b2d748fc6c414019525d
	Sep 25 10:47:02 ingress-addon-legacy-260900 crio[960]: time="2023-09-25 10:47:02.759328684Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=a347982e-4e27-45f1-b8d7-d0e705fe2737 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 25 10:47:10 ingress-addon-legacy-260900 crio[960]: time="2023-09-25 10:47:10.759804191Z" level=info msg="Stopping pod sandbox: 485a3a68d787760ae64d24eda7c0ed62a48217138554ff8b3b78e6daa90f9951" id=f13384bd-33c6-4b8d-bc15-0cc1e9e4615a name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Sep 25 10:47:10 ingress-addon-legacy-260900 crio[960]: time="2023-09-25 10:47:10.760924397Z" level=info msg="Stopped pod sandbox: 485a3a68d787760ae64d24eda7c0ed62a48217138554ff8b3b78e6daa90f9951" id=f13384bd-33c6-4b8d-bc15-0cc1e9e4615a name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Sep 25 10:47:10 ingress-addon-legacy-260900 crio[960]: time="2023-09-25 10:47:10.769511383Z" level=info msg="Stopping pod sandbox: 485a3a68d787760ae64d24eda7c0ed62a48217138554ff8b3b78e6daa90f9951" id=3eee9719-e6a2-425b-8f29-5da5fab5490b name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Sep 25 10:47:10 ingress-addon-legacy-260900 crio[960]: time="2023-09-25 10:47:10.769555575Z" level=info msg="Stopped pod sandbox (already stopped): 485a3a68d787760ae64d24eda7c0ed62a48217138554ff8b3b78e6daa90f9951" id=3eee9719-e6a2-425b-8f29-5da5fab5490b name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Sep 25 10:47:11 ingress-addon-legacy-260900 crio[960]: time="2023-09-25 10:47:11.505767981Z" level=info msg="Stopping container: 0ad374f69ca52798d4112b5141c29efd6e497d5dcc3026d9543ee2039a44865a (timeout: 2s)" id=d513f13e-530d-4e17-9715-ae07d0ce4040 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Sep 25 10:47:11 ingress-addon-legacy-260900 crio[960]: time="2023-09-25 10:47:11.508662284Z" level=info msg="Stopping container: 0ad374f69ca52798d4112b5141c29efd6e497d5dcc3026d9543ee2039a44865a (timeout: 2s)" id=f71d3d9f-7d71-4c18-9936-92fd458600a8 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Sep 25 10:47:12 ingress-addon-legacy-260900 crio[960]: time="2023-09-25 10:47:12.758912980Z" level=info msg="Stopping pod sandbox: 485a3a68d787760ae64d24eda7c0ed62a48217138554ff8b3b78e6daa90f9951" id=b2107f7d-d4b0-41d3-a545-14d10b9d2c2f name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Sep 25 10:47:12 ingress-addon-legacy-260900 crio[960]: time="2023-09-25 10:47:12.758972494Z" level=info msg="Stopped pod sandbox (already stopped): 485a3a68d787760ae64d24eda7c0ed62a48217138554ff8b3b78e6daa90f9951" id=b2107f7d-d4b0-41d3-a545-14d10b9d2c2f name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Sep 25 10:47:13 ingress-addon-legacy-260900 crio[960]: time="2023-09-25 10:47:13.516034033Z" level=warning msg="Stopping container 0ad374f69ca52798d4112b5141c29efd6e497d5dcc3026d9543ee2039a44865a with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=d513f13e-530d-4e17-9715-ae07d0ce4040 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Sep 25 10:47:13 ingress-addon-legacy-260900 conmon[3474]: conmon 0ad374f69ca52798d411 <ninfo>: container 3485 exited with status 137
	Sep 25 10:47:13 ingress-addon-legacy-260900 crio[960]: time="2023-09-25 10:47:13.679502793Z" level=info msg="Stopped container 0ad374f69ca52798d4112b5141c29efd6e497d5dcc3026d9543ee2039a44865a: ingress-nginx/ingress-nginx-controller-7fcf777cb7-srbd5/controller" id=f71d3d9f-7d71-4c18-9936-92fd458600a8 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Sep 25 10:47:13 ingress-addon-legacy-260900 crio[960]: time="2023-09-25 10:47:13.679574550Z" level=info msg="Stopped container 0ad374f69ca52798d4112b5141c29efd6e497d5dcc3026d9543ee2039a44865a: ingress-nginx/ingress-nginx-controller-7fcf777cb7-srbd5/controller" id=d513f13e-530d-4e17-9715-ae07d0ce4040 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Sep 25 10:47:13 ingress-addon-legacy-260900 crio[960]: time="2023-09-25 10:47:13.680142788Z" level=info msg="Stopping pod sandbox: 55e81811e6fcf514252d92d5c2b3a188bd7b01167ee8c1f530088d8a102eaeab" id=288b1128-2f5a-4f8d-be27-cf7ffaafcd20 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Sep 25 10:47:13 ingress-addon-legacy-260900 crio[960]: time="2023-09-25 10:47:13.680246925Z" level=info msg="Stopping pod sandbox: 55e81811e6fcf514252d92d5c2b3a188bd7b01167ee8c1f530088d8a102eaeab" id=41dc7fe4-5e9e-48d4-a6aa-98bc28d02df4 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Sep 25 10:47:13 ingress-addon-legacy-260900 crio[960]: time="2023-09-25 10:47:13.682894055Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-7SJGCTMIYSESQRIQ - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-7ATGVG373R6ZLLBI - [0:0]\n-X KUBE-HP-7ATGVG373R6ZLLBI\n-X KUBE-HP-7SJGCTMIYSESQRIQ\nCOMMIT\n"
	Sep 25 10:47:13 ingress-addon-legacy-260900 crio[960]: time="2023-09-25 10:47:13.684141182Z" level=info msg="Closing host port tcp:80"
	Sep 25 10:47:13 ingress-addon-legacy-260900 crio[960]: time="2023-09-25 10:47:13.684184381Z" level=info msg="Closing host port tcp:443"
	Sep 25 10:47:13 ingress-addon-legacy-260900 crio[960]: time="2023-09-25 10:47:13.685134937Z" level=info msg="Host port tcp:80 does not have an open socket"
	Sep 25 10:47:13 ingress-addon-legacy-260900 crio[960]: time="2023-09-25 10:47:13.685160221Z" level=info msg="Host port tcp:443 does not have an open socket"
	Sep 25 10:47:13 ingress-addon-legacy-260900 crio[960]: time="2023-09-25 10:47:13.685300120Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-srbd5 Namespace:ingress-nginx ID:55e81811e6fcf514252d92d5c2b3a188bd7b01167ee8c1f530088d8a102eaeab UID:62e3760d-4311-49ff-9d7a-dfc2ff961123 NetNS:/var/run/netns/443556c1-63ce-498e-b1d4-4e72409f9dac Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 25 10:47:13 ingress-addon-legacy-260900 crio[960]: time="2023-09-25 10:47:13.685419590Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-srbd5 from CNI network \"kindnet\" (type=ptp)"
	Sep 25 10:47:13 ingress-addon-legacy-260900 crio[960]: time="2023-09-25 10:47:13.734182800Z" level=info msg="Stopped pod sandbox: 55e81811e6fcf514252d92d5c2b3a188bd7b01167ee8c1f530088d8a102eaeab" id=288b1128-2f5a-4f8d-be27-cf7ffaafcd20 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Sep 25 10:47:13 ingress-addon-legacy-260900 crio[960]: time="2023-09-25 10:47:13.734338219Z" level=info msg="Stopped pod sandbox (already stopped): 55e81811e6fcf514252d92d5c2b3a188bd7b01167ee8c1f530088d8a102eaeab" id=41dc7fe4-5e9e-48d4-a6aa-98bc28d02df4 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2b4c4cd6e5fc0       gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb            24 seconds ago      Running             hello-world-app           0                   906e3df385732       hello-world-app-5f5d8b66bb-28xrc
	19a9c795d39fc       docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70                    2 minutes ago       Running             nginx                     0                   323fc821eabb9       nginx
	0ad374f69ca52       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   3 minutes ago       Exited              controller                0                   55e81811e6fcf       ingress-nginx-controller-7fcf777cb7-srbd5
	88ecb530086b5       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              patch                     0                   c873a0dbf88e1       ingress-nginx-admission-patch-lkkms
	f2605dca54a66       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   26d325396819b       ingress-nginx-admission-create-vmtjb
	0b92ffbc03ad9       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   c9c515864139d       coredns-66bff467f8-sw55h
	b6a74182c286e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   0548add9cf7f4       storage-provisioner
	5b0fb8b77717c       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                 3 minutes ago       Running             kindnet-cni               0                   cb13d8449ceae       kindnet-ss2wc
	d37ed65378816       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   cf8a278cc679d       kube-proxy-j9xwk
	239fcff0bda04       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   3 minutes ago       Running             etcd                      0                   cb040b1c8b052       etcd-ingress-addon-legacy-260900
	9c1168a492a76       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   3 minutes ago       Running             kube-apiserver            0                   3ab25c15390d9       kube-apiserver-ingress-addon-legacy-260900
	6e7d22725d672       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   3 minutes ago       Running             kube-controller-manager   0                   9c481234b9715       kube-controller-manager-ingress-addon-legacy-260900
	feaff0b9b21ca       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   3 minutes ago       Running             kube-scheduler            0                   b2f96f81a6677       kube-scheduler-ingress-addon-legacy-260900
	
	* 
	* ==> coredns [0b92ffbc03ad90cc4a39d586d060224c586f118391acb39dce9777a6e8e083b4] <==
	* [INFO] 10.244.0.5:57567 - 1809 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.005731548s
	[INFO] 10.244.0.5:57567 - 52760 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005660207s
	[INFO] 10.244.0.5:43352 - 24904 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005686215s
	[INFO] 10.244.0.5:46353 - 14441 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006013591s
	[INFO] 10.244.0.5:54793 - 53246 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005864699s
	[INFO] 10.244.0.5:53157 - 57230 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005825004s
	[INFO] 10.244.0.5:48860 - 55302 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005962761s
	[INFO] 10.244.0.5:51373 - 38545 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006092368s
	[INFO] 10.244.0.5:45539 - 59909 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006068993s
	[INFO] 10.244.0.5:43352 - 49568 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005503512s
	[INFO] 10.244.0.5:53157 - 20375 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005566897s
	[INFO] 10.244.0.5:54793 - 31883 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005651576s
	[INFO] 10.244.0.5:48860 - 50211 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005619581s
	[INFO] 10.244.0.5:43352 - 34261 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000069585s
	[INFO] 10.244.0.5:53157 - 61577 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000056241s
	[INFO] 10.244.0.5:57567 - 64553 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005986245s
	[INFO] 10.244.0.5:48860 - 35866 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000072641s
	[INFO] 10.244.0.5:51373 - 4909 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005829096s
	[INFO] 10.244.0.5:46353 - 51918 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006143357s
	[INFO] 10.244.0.5:45539 - 55051 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00597198s
	[INFO] 10.244.0.5:54793 - 23020 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000222889s
	[INFO] 10.244.0.5:57567 - 28832 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000090834s
	[INFO] 10.244.0.5:51373 - 64732 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000117243s
	[INFO] 10.244.0.5:46353 - 25024 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000058131s
	[INFO] 10.244.0.5:45539 - 6264 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000139491s
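
The alternating NXDOMAIN/NOERROR pattern in this CoreDNS log is resolv.conf search-list expansion: the queried name hello-world-app.default.svc.cluster.local has four dots, below the pod's ndots:5 threshold, so the resolver first appends each search suffix (including the host-inherited google.internal and c.k8s-minikube.internal, which return NXDOMAIN) before trying the name as-is, which finally answers NOERROR. A sketch of that expansion order under the Kubernetes default ndots:5:

```go
// A sketch of resolver search-list expansion under the Kubernetes default
// ndots:5. The search domains listed are the ones visible in this log;
// a real pod's resolv.conf is the authoritative source.
package main

import (
	"fmt"
	"strings"
)

func expand(name string, search []string, ndots int) []string {
	dots := strings.Count(name, ".")
	var tries []string
	if dots >= ndots {
		tries = append(tries, name+".") // qualified enough: absolute first
	}
	for _, s := range search {
		tries = append(tries, name+"."+s+".")
	}
	if dots < ndots {
		tries = append(tries, name+".") // absolute name only tried last
	}
	return tries
}

func main() {
	search := []string{"default.svc.cluster.local", "svc.cluster.local", "cluster.local",
		"c.k8s-minikube.internal", "google.internal"}
	for _, q := range expand("hello-world-app.default.svc.cluster.local", search, 5) {
		fmt.Println(q) // every suffixed try NXDOMAINs; the final absolute try succeeds
	}
}
```
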
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-260900
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-260900
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1bf6c3d5317028f348e55ea19d261973a6487d3c
	                    minikube.k8s.io/name=ingress-addon-legacy-260900
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_25T10_43_40_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 25 Sep 2023 10:43:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-260900
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 25 Sep 2023 10:47:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 25 Sep 2023 10:47:10 +0000   Mon, 25 Sep 2023 10:43:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 25 Sep 2023 10:47:10 +0000   Mon, 25 Sep 2023 10:43:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 25 Sep 2023 10:47:10 +0000   Mon, 25 Sep 2023 10:43:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 25 Sep 2023 10:47:10 +0000   Mon, 25 Sep 2023 10:44:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-260900
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	System Info:
	  Machine ID:                 9a4cbfecaa63403aa53a616d62c7e14f
	  System UUID:                e02f6b26-32a2-4ee5-8d4d-c39d50f01ad1
	  Boot ID:                    a0198791-e836-4d6b-a7bd-f74954d514fc
	  Kernel Version:             5.15.0-1042-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-28xrc                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  kube-system                 coredns-66bff467f8-sw55h                               100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m24s
	  kube-system                 etcd-ingress-addon-legacy-260900                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 kindnet-ss2wc                                          100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m24s
	  kube-system                 kube-apiserver-ingress-addon-legacy-260900             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-260900    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 kube-proxy-j9xwk                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m24s
	  kube-system                 kube-scheduler-ingress-addon-legacy-260900             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             120Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From        Message
	  ----    ------                   ----   ----        -------
	  Normal  Starting                 3m39s  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m39s  kubelet     Node ingress-addon-legacy-260900 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m39s  kubelet     Node ingress-addon-legacy-260900 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m39s  kubelet     Node ingress-addon-legacy-260900 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m24s  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m19s  kubelet     Node ingress-addon-legacy-260900 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.004937] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006662] FS-Cache: N-cookie d=00000000d258af6f{9p.inode} n=0000000062fac2c7
	[  +0.008748] FS-Cache: N-key=[8] '92a00f0200000000'
	[  +4.069384] FS-Cache: Duplicate cookie detected
	[  +0.004741] FS-Cache: O-cookie c=00000012 [p=00000002 fl=222 nc=0 na=1]
	[  +0.006770] FS-Cache: O-cookie d=000000002b1f06e9{9P.session} n=00000000674afd8c
	[  +0.007526] FS-Cache: O-key=[10] '34323935323731363530'
	[  +0.005389] FS-Cache: N-cookie c=00000013 [p=00000002 fl=2 nc=0 na=1]
	[  +0.006558] FS-Cache: N-cookie d=000000002b1f06e9{9P.session} n=000000000141c22c
	[  +0.007506] FS-Cache: N-key=[10] '34323935323731363530'
	[  +8.038207] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Sep25 10:44] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 3a 5d 55 4e d6 7b ce 86 a6 6c 60 67 08 00
	[  +1.008185] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 3a 5d 55 4e d6 7b ce 86 a6 6c 60 67 08 00
	[  +2.015763] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 3a 5d 55 4e d6 7b ce 86 a6 6c 60 67 08 00
	[  +4.063598] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 3a 5d 55 4e d6 7b ce 86 a6 6c 60 67 08 00
	[  +8.191166] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 3a 5d 55 4e d6 7b ce 86 a6 6c 60 67 08 00
	[Sep25 10:45] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 3a 5d 55 4e d6 7b ce 86 a6 6c 60 67 08 00
	[ +33.020822] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 3a 5d 55 4e d6 7b ce 86 a6 6c 60 67 08 00
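
Note: "martian source 10.244.0.5 from 127.0.0.1" means the kernel saw packets arrive on eth0 claiming the loopback address as their source, which it refuses to route. The timestamps line up with the ingress test's curl against http://127.0.0.1/ being redirected into the pod network, so these entries are plausibly a side effect of that probe rather than an independent fault. The logging itself is governed by a sysctl, which can be read on the node (shown for reference only):

	sysctl net.ipv4.conf.all.log_martians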
	
	* 
	* ==> etcd [239fcff0bda0434d2246764cda38c50ac2176ed5ae03f8061ead06d1b6e5bd45] <==
	* raft2023/09/25 10:43:33 INFO: aec36adc501070cc became follower at term 0
	raft2023/09/25 10:43:33 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/09/25 10:43:33 INFO: aec36adc501070cc became follower at term 1
	raft2023/09/25 10:43:33 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-09-25 10:43:33.781296 W | auth: simple token is not cryptographically signed
	2023-09-25 10:43:33.847004 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-09-25 10:43:33.849453 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/09/25 10:43:33 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-09-25 10:43:33.849820 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-09-25 10:43:33.849942 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2023-09-25 10:43:33.849992 I | embed: listening for peers on 192.168.49.2:2380
	2023-09-25 10:43:33.850484 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/09/25 10:43:34 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/09/25 10:43:34 INFO: aec36adc501070cc became candidate at term 2
	raft2023/09/25 10:43:34 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/09/25 10:43:34 INFO: aec36adc501070cc became leader at term 2
	raft2023/09/25 10:43:34 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-09-25 10:43:34.377197 I | embed: ready to serve client requests
	2023-09-25 10:43:34.377338 I | etcdserver: published {Name:ingress-addon-legacy-260900 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-09-25 10:43:34.377364 I | etcdserver: setting up the initial cluster version to 3.4
	2023-09-25 10:43:34.377420 I | embed: ready to serve client requests
	2023-09-25 10:43:34.377755 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-09-25 10:43:34.377818 I | etcdserver/api: enabled capabilities for version 3.4
	2023-09-25 10:43:34.378856 I | embed: serving client requests on 127.0.0.1:2379
	2023-09-25 10:43:34.379016 I | embed: serving client requests on 192.168.49.2:2379
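
Note: this is a healthy single-member Raft bootstrap: the node starts as follower at term 0, fast-forwards 9 of 10 election ticks because it is the only voter, elects itself leader at term 2, and begins serving clients. Liveness can be re-checked later against the same endpoint and certificate paths the log mentions; a sketch, assuming etcdctl is available on the node:

	ETCDCTL_API=3 etcdctl --endpoints=https://192.168.49.2:2379 \
	  --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	  --cert=/var/lib/minikube/certs/etcd/server.crt \
	  --key=/var/lib/minikube/certs/etcd/server.key \
	  endpoint status --write-out=table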
	
	* 
	* ==> kernel <==
	*  10:47:19 up 29 min,  0 users,  load average: 0.12, 0.54, 0.46
	Linux ingress-addon-legacy-260900 5.15.0-1042-gcp #50~20.04.1-Ubuntu SMP Mon Sep 11 03:30:57 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [5b0fb8b77717c1444cf6ac42989dd0879d0c4ff7e7733576ff89cf73fc29a942] <==
	* I0925 10:45:19.014043       1 main.go:227] handling current node
	I0925 10:45:29.017510       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0925 10:45:29.017535       1 main.go:227] handling current node
	I0925 10:45:39.027449       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0925 10:45:39.027473       1 main.go:227] handling current node
	I0925 10:45:49.031115       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0925 10:45:49.031143       1 main.go:227] handling current node
	I0925 10:45:59.043214       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0925 10:45:59.043239       1 main.go:227] handling current node
	I0925 10:46:09.046629       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0925 10:46:09.046656       1 main.go:227] handling current node
	I0925 10:46:19.049914       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0925 10:46:19.049947       1 main.go:227] handling current node
	I0925 10:46:29.053627       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0925 10:46:29.053650       1 main.go:227] handling current node
	I0925 10:46:39.064128       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0925 10:46:39.064155       1 main.go:227] handling current node
	I0925 10:46:49.067656       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0925 10:46:49.067687       1 main.go:227] handling current node
	I0925 10:46:59.079691       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0925 10:46:59.079719       1 main.go:227] handling current node
	I0925 10:47:09.083023       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0925 10:47:09.083047       1 main.go:227] handling current node
	I0925 10:47:19.091750       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0925 10:47:19.091773       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [9c1168a492a7687fb2cb6e3b915acdfc845f86c185898fc5116cd922ce0c50cb] <==
	* E0925 10:43:37.656399       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I0925 10:43:37.756922       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0925 10:43:37.781926       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0925 10:43:37.802597       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0925 10:43:37.804058       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0925 10:43:37.846380       1 cache.go:39] Caches are synced for autoregister controller
	I0925 10:43:38.581032       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0925 10:43:38.581059       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0925 10:43:38.585420       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0925 10:43:38.588061       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0925 10:43:38.588082       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0925 10:43:38.853206       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0925 10:43:38.880406       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0925 10:43:38.971490       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0925 10:43:38.972221       1 controller.go:609] quota admission added evaluator for: endpoints
	I0925 10:43:38.974921       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0925 10:43:39.905499       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0925 10:43:40.368622       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0925 10:43:40.520951       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0925 10:43:40.726623       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0925 10:43:55.470729       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0925 10:43:55.474714       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0925 10:44:09.386023       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0925 10:44:32.673724       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E0925 10:47:10.769105       1 watch.go:251] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoder{writer:(*http2.responseWriter)(0xc00cd880e8), encoder:(*versioning.codec)(0xc00a7d2640), buf:(*bytes.Buffer)(0xc00c1ead20)})
	
	* 
	* ==> kube-controller-manager [6e7d22725d672a44aa3f97d6f416b8596fc31a8d61fe898a8e2b78df389bc392] <==
	* I0925 10:43:55.726356       1 shared_informer.go:230] Caches are synced for endpoint_slice 
	I0925 10:43:55.821957       1 shared_informer.go:230] Caches are synced for service account 
	I0925 10:43:55.822185       1 shared_informer.go:230] Caches are synced for endpoint 
	I0925 10:43:55.862580       1 shared_informer.go:230] Caches are synced for namespace 
	I0925 10:43:55.922292       1 shared_informer.go:230] Caches are synced for attach detach 
	I0925 10:43:55.963388       1 shared_informer.go:230] Caches are synced for resource quota 
	I0925 10:43:55.972082       1 shared_informer.go:230] Caches are synced for stateful set 
	I0925 10:43:55.972515       1 shared_informer.go:230] Caches are synced for resource quota 
	I0925 10:43:55.974159       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0925 10:43:56.003846       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0925 10:43:56.003870       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0925 10:43:56.021733       1 shared_informer.go:230] Caches are synced for disruption 
	I0925 10:43:56.021758       1 disruption.go:339] Sending events to api server.
	I0925 10:43:56.036021       1 shared_informer.go:230] Caches are synced for persistent volume 
	I0925 10:43:56.353859       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"efbeb74f-3b9d-4ca6-b212-91dcbfd92dbc", APIVersion:"apps/v1", ResourceVersion:"363", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0925 10:43:56.447969       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"02815aaa-fdc9-4410-ab4c-2035b560f7a1", APIVersion:"apps/v1", ResourceVersion:"364", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-48j9b
	I0925 10:44:05.389328       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0925 10:44:09.378244       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"d5fa35c4-ca86-4ffc-b205-6dc0cf4f072b", APIVersion:"apps/v1", ResourceVersion:"460", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0925 10:44:09.388136       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"991e662e-ac24-4ec8-9a1c-fc057afed488", APIVersion:"apps/v1", ResourceVersion:"461", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-srbd5
	I0925 10:44:09.456450       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"7ec9c651-a3fb-41b1-b68b-8c460210eadc", APIVersion:"batch/v1", ResourceVersion:"467", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-vmtjb
	I0925 10:44:09.466675       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"0d5a12c4-7e81-4b43-bc7f-ce739a17eb53", APIVersion:"batch/v1", ResourceVersion:"477", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-lkkms
	I0925 10:44:12.816397       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"0d5a12c4-7e81-4b43-bc7f-ce739a17eb53", APIVersion:"batch/v1", ResourceVersion:"484", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0925 10:44:12.822896       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"7ec9c651-a3fb-41b1-b68b-8c460210eadc", APIVersion:"batch/v1", ResourceVersion:"476", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0925 10:46:53.235584       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"587b6083-5686-41ca-b55c-59bbdc928d3d", APIVersion:"apps/v1", ResourceVersion:"698", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0925 10:46:53.241321       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"90063602-fad2-4953-95ba-bc4bc64de33b", APIVersion:"apps/v1", ResourceVersion:"699", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-28xrc
	
	* 
	* ==> kube-proxy [d37ed65378816644e8ec95cf5b9d9425e5e385179088d0d1f230f6a3841746ad] <==
	* W0925 10:43:55.967951       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0925 10:43:55.974720       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0925 10:43:55.974745       1 server_others.go:186] Using iptables Proxier.
	I0925 10:43:55.975000       1 server.go:583] Version: v1.18.20
	I0925 10:43:55.975502       1 config.go:133] Starting endpoints config controller
	I0925 10:43:55.975525       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0925 10:43:55.975539       1 config.go:315] Starting service config controller
	I0925 10:43:55.975559       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0925 10:43:56.075696       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0925 10:43:56.075723       1 shared_informer.go:230] Caches are synced for service config 
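
Note: the opening warning (Unknown proxy mode "", assuming iptables proxy) only means the proxier mode was left empty in the kube-proxy configuration and kube-proxy fell back to its iptables default. Setting the field explicitly silences it; a minimal sketch of the relevant stanza:

	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	mode: "iptables"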
	
	* 
	* ==> kube-scheduler [feaff0b9b21ca9c5620b7fc58ae5eb618b1191dd9c909a48565e684e49644c16] <==
	* W0925 10:43:37.652354       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0925 10:43:37.652386       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0925 10:43:37.664887       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0925 10:43:37.664983       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0925 10:43:37.667038       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0925 10:43:37.667069       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0925 10:43:37.667139       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0925 10:43:37.748291       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0925 10:43:37.749559       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0925 10:43:37.751450       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0925 10:43:37.751551       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0925 10:43:37.751622       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0925 10:43:37.751639       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0925 10:43:37.751707       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0925 10:43:37.751740       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0925 10:43:37.751832       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0925 10:43:37.751871       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0925 10:43:37.751918       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0925 10:43:37.752046       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0925 10:43:37.752170       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0925 10:43:38.607939       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0925 10:43:38.660855       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0925 10:43:38.694677       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0925 10:43:38.762537       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0925 10:43:39.167210       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
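
Note: the burst of "forbidden" list errors at 10:43:37-38 is the scheduler's informers starting before the cluster's default RBAC objects existed; once the system:kube-scheduler bindings were created, the caches synced (final line above). If such errors ever persist past startup, the binding can be checked directly (illustrative):

	kubectl get clusterrolebinding system:kube-scheduler -o wide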
	
	* 
	* ==> kubelet <==
	* Sep 25 10:46:37 ingress-addon-legacy-260900 kubelet[1877]: E0925 10:46:37.759673    1877 pod_workers.go:191] Error syncing pod 5b4fc0e6-f1c8-4487-842a-a1e5cb221c04 ("kube-ingress-dns-minikube_kube-system(5b4fc0e6-f1c8-4487-842a-a1e5cb221c04)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Sep 25 10:46:48 ingress-addon-legacy-260900 kubelet[1877]: E0925 10:46:48.759500    1877 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Sep 25 10:46:48 ingress-addon-legacy-260900 kubelet[1877]: E0925 10:46:48.759533    1877 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Sep 25 10:46:48 ingress-addon-legacy-260900 kubelet[1877]: E0925 10:46:48.759583    1877 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Sep 25 10:46:48 ingress-addon-legacy-260900 kubelet[1877]: E0925 10:46:48.759618    1877 pod_workers.go:191] Error syncing pod 5b4fc0e6-f1c8-4487-842a-a1e5cb221c04 ("kube-ingress-dns-minikube_kube-system(5b4fc0e6-f1c8-4487-842a-a1e5cb221c04)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Sep 25 10:46:53 ingress-addon-legacy-260900 kubelet[1877]: I0925 10:46:53.246516    1877 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Sep 25 10:46:53 ingress-addon-legacy-260900 kubelet[1877]: I0925 10:46:53.396428    1877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-s9ntl" (UniqueName: "kubernetes.io/secret/53ed4307-2ff5-4fd3-90ea-ad7d93e28482-default-token-s9ntl") pod "hello-world-app-5f5d8b66bb-28xrc" (UID: "53ed4307-2ff5-4fd3-90ea-ad7d93e28482")
	Sep 25 10:46:53 ingress-addon-legacy-260900 kubelet[1877]: W0925 10:46:53.593555    1877 manager.go:1131] Failed to process watch event {EventType:0 Name:/docker/c7c985e16e6d68f2747976bc8172b0a3996d480d0cec3f5af9634869196c0908/crio-906e3df38573209768c5c0c640bc683fe78a92603944b2d748fc6c414019525d WatchSource:0}: Error finding container 906e3df38573209768c5c0c640bc683fe78a92603944b2d748fc6c414019525d: Status 404 returned error &{%!s(*http.body=&{0xc0010bf5e0 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x750800) %!s(func() error=0x750790)}
	Sep 25 10:47:02 ingress-addon-legacy-260900 kubelet[1877]: E0925 10:47:02.759687    1877 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Sep 25 10:47:02 ingress-addon-legacy-260900 kubelet[1877]: E0925 10:47:02.759730    1877 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Sep 25 10:47:02 ingress-addon-legacy-260900 kubelet[1877]: E0925 10:47:02.759783    1877 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Sep 25 10:47:02 ingress-addon-legacy-260900 kubelet[1877]: E0925 10:47:02.759811    1877 pod_workers.go:191] Error syncing pod 5b4fc0e6-f1c8-4487-842a-a1e5cb221c04 ("kube-ingress-dns-minikube_kube-system(5b4fc0e6-f1c8-4487-842a-a1e5cb221c04)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Sep 25 10:47:09 ingress-addon-legacy-260900 kubelet[1877]: I0925 10:47:09.032448    1877 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-f55vp" (UniqueName: "kubernetes.io/secret/5b4fc0e6-f1c8-4487-842a-a1e5cb221c04-minikube-ingress-dns-token-f55vp") pod "5b4fc0e6-f1c8-4487-842a-a1e5cb221c04" (UID: "5b4fc0e6-f1c8-4487-842a-a1e5cb221c04")
	Sep 25 10:47:09 ingress-addon-legacy-260900 kubelet[1877]: I0925 10:47:09.034413    1877 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b4fc0e6-f1c8-4487-842a-a1e5cb221c04-minikube-ingress-dns-token-f55vp" (OuterVolumeSpecName: "minikube-ingress-dns-token-f55vp") pod "5b4fc0e6-f1c8-4487-842a-a1e5cb221c04" (UID: "5b4fc0e6-f1c8-4487-842a-a1e5cb221c04"). InnerVolumeSpecName "minikube-ingress-dns-token-f55vp". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 25 10:47:09 ingress-addon-legacy-260900 kubelet[1877]: I0925 10:47:09.132774    1877 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-f55vp" (UniqueName: "kubernetes.io/secret/5b4fc0e6-f1c8-4487-842a-a1e5cb221c04-minikube-ingress-dns-token-f55vp") on node "ingress-addon-legacy-260900" DevicePath ""
	Sep 25 10:47:11 ingress-addon-legacy-260900 kubelet[1877]: W0925 10:47:11.096060    1877 pod_container_deletor.go:77] Container "485a3a68d787760ae64d24eda7c0ed62a48217138554ff8b3b78e6daa90f9951" not found in pod's containers
	Sep 25 10:47:11 ingress-addon-legacy-260900 kubelet[1877]: E0925 10:47:11.507867    1877 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-srbd5.17881e888b3bd92b", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-srbd5", UID:"62e3760d-4311-49ff-9d7a-dfc2ff961123", APIVersion:"v1", ResourceVersion:"466", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-260900"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc13c772bde1ec32b, ext:211189406350, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc13c772bde1ec32b, ext:211189406350, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-srbd5.17881e888b3bd92b" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Sep 25 10:47:11 ingress-addon-legacy-260900 kubelet[1877]: E0925 10:47:11.511101    1877 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-srbd5.17881e888b3bd92b", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-srbd5", UID:"62e3760d-4311-49ff-9d7a-dfc2ff961123", APIVersion:"v1", ResourceVersion:"466", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-260900"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc13c772bde1ec32b, ext:211189406350, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc13c772bde48cbb6, ext:211192161054, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-srbd5.17881e888b3bd92b" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Sep 25 10:47:14 ingress-addon-legacy-260900 kubelet[1877]: W0925 10:47:14.101443    1877 pod_container_deletor.go:77] Container "55e81811e6fcf514252d92d5c2b3a188bd7b01167ee8c1f530088d8a102eaeab" not found in pod's containers
	Sep 25 10:47:15 ingress-addon-legacy-260900 kubelet[1877]: I0925 10:47:15.654526    1877 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-2fnz4" (UniqueName: "kubernetes.io/secret/62e3760d-4311-49ff-9d7a-dfc2ff961123-ingress-nginx-token-2fnz4") pod "62e3760d-4311-49ff-9d7a-dfc2ff961123" (UID: "62e3760d-4311-49ff-9d7a-dfc2ff961123")
	Sep 25 10:47:15 ingress-addon-legacy-260900 kubelet[1877]: I0925 10:47:15.654582    1877 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/62e3760d-4311-49ff-9d7a-dfc2ff961123-webhook-cert") pod "62e3760d-4311-49ff-9d7a-dfc2ff961123" (UID: "62e3760d-4311-49ff-9d7a-dfc2ff961123")
	Sep 25 10:47:15 ingress-addon-legacy-260900 kubelet[1877]: I0925 10:47:15.656448    1877 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62e3760d-4311-49ff-9d7a-dfc2ff961123-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "62e3760d-4311-49ff-9d7a-dfc2ff961123" (UID: "62e3760d-4311-49ff-9d7a-dfc2ff961123"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 25 10:47:15 ingress-addon-legacy-260900 kubelet[1877]: I0925 10:47:15.656749    1877 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62e3760d-4311-49ff-9d7a-dfc2ff961123-ingress-nginx-token-2fnz4" (OuterVolumeSpecName: "ingress-nginx-token-2fnz4") pod "62e3760d-4311-49ff-9d7a-dfc2ff961123" (UID: "62e3760d-4311-49ff-9d7a-dfc2ff961123"). InnerVolumeSpecName "ingress-nginx-token-2fnz4". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 25 10:47:15 ingress-addon-legacy-260900 kubelet[1877]: I0925 10:47:15.754873    1877 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/62e3760d-4311-49ff-9d7a-dfc2ff961123-webhook-cert") on node "ingress-addon-legacy-260900" DevicePath ""
	Sep 25 10:47:15 ingress-addon-legacy-260900 kubelet[1877]: I0925 10:47:15.754918    1877 reconciler.go:319] Volume detached for volume "ingress-nginx-token-2fnz4" (UniqueName: "kubernetes.io/secret/62e3760d-4311-49ff-9d7a-dfc2ff961123-ingress-nginx-token-2fnz4") on node "ingress-addon-legacy-260900" DevicePath ""
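
Note: the repeated ImageInspectError above is the likely root cause of the ingress-dns half of this failure: CRI-O rejects unqualified ("short-name") image references when /etc/containers/registries.conf defines no unqualified-search registries. Either the addon image needs a fully qualified reference (e.g. prefixed with docker.io/) or the node needs a search registry configured. A minimal sketch of the latter, as a registries.conf fragment:

	# /etc/containers/registries.conf
	unqualified-search-registries = ["docker.io"]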
	
	* 
	* ==> storage-provisioner [b6a74182c286e06ae7e815f374d191afe5ff2803dd8d54fc3e81ef0446c3ef69] <==
	* I0925 10:44:06.002009       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0925 10:44:06.009807       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0925 10:44:06.009844       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0925 10:44:06.050050       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0925 10:44:06.050205       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-260900_e107152c-488b-42c1-b65c-cea29a6cea36!
	I0925 10:44:06.050984       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"27f0d206-fc7d-4c34-9d23-4fc6bde3a3a0", APIVersion:"v1", ResourceVersion:"414", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-260900_e107152c-488b-42c1-b65c-cea29a6cea36 became leader
	I0925 10:44:06.150793       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-260900_e107152c-488b-42c1-b65c-cea29a6cea36!
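
Note: a normal provisioner startup: it acquires the kube-system/k8s.io-minikube-hostpath leader-election lock (backed by an Endpoints object, per the LeaderElection event above) before starting its controller. The lock holder can be inspected with (illustrative):

	kubectl -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml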
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-260900 -n ingress-addon-legacy-260900
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-260900 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (180.25s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (2.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-529126 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-529126 -- exec busybox-5bc68d56bd-6xmht -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-529126 -- exec busybox-5bc68d56bd-6xmht -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-529126 -- exec busybox-5bc68d56bd-6xmht -- sh -c "ping -c 1 192.168.58.1": exit status 1 (159.999198ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-6xmht): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-529126 -- exec busybox-5bc68d56bd-jnhqs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-529126 -- exec busybox-5bc68d56bd-jnhqs -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-529126 -- exec busybox-5bc68d56bd-jnhqs -- sh -c "ping -c 1 192.168.58.1": exit status 1 (170.368772ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-jnhqs): exit status 1
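Note: both pods fail identically with "ping: permission denied (are you root?)", which points at the pod's privileges rather than at routing: busybox's ping opens a raw ICMP socket, and that requires root or CAP_NET_RAW. CRI-O's default capability set, unlike Docker's, does not include NET_RAW, which is consistent with this test failing on a crio run. A sketch of the standard fix, assuming the busybox deployment's pod template is editable (field placement is illustrative):

	containers:
	- name: busybox
	  securityContext:
	    capabilities:
	      add: ["NET_RAW"]

Depending on the busybox build, allowing unprivileged ICMP datagram sockets via the net.ipv4.ping_group_range sysctl may also work, but CAP_NET_RAW is the reliable option.
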
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-529126
helpers_test.go:235: (dbg) docker inspect multinode-529126:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6e3734cea073519e90a2c10bb6c835ae434fe2891a64f316b0a297aecd57d5d5",
	        "Created": "2023-09-25T10:52:19.524122542Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 97790,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-09-25T10:52:19.798450719Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:88569b0f9771c4a82a68adf2effda10d3720003bb7c688860ef975d692546171",
	        "ResolvConfPath": "/var/lib/docker/containers/6e3734cea073519e90a2c10bb6c835ae434fe2891a64f316b0a297aecd57d5d5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6e3734cea073519e90a2c10bb6c835ae434fe2891a64f316b0a297aecd57d5d5/hostname",
	        "HostsPath": "/var/lib/docker/containers/6e3734cea073519e90a2c10bb6c835ae434fe2891a64f316b0a297aecd57d5d5/hosts",
	        "LogPath": "/var/lib/docker/containers/6e3734cea073519e90a2c10bb6c835ae434fe2891a64f316b0a297aecd57d5d5/6e3734cea073519e90a2c10bb6c835ae434fe2891a64f316b0a297aecd57d5d5-json.log",
	        "Name": "/multinode-529126",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-529126:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-529126",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/efb13397e35ae401ffc513f94dc3d33455be7ad782d934e0cd3869feb616a6be-init/diff:/var/lib/docker/overlay2/f6c0857361d94c26f0cbf62f9795a30e8812e7f7d65e2dc29161b25ea9a7ede1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/efb13397e35ae401ffc513f94dc3d33455be7ad782d934e0cd3869feb616a6be/merged",
	                "UpperDir": "/var/lib/docker/overlay2/efb13397e35ae401ffc513f94dc3d33455be7ad782d934e0cd3869feb616a6be/diff",
	                "WorkDir": "/var/lib/docker/overlay2/efb13397e35ae401ffc513f94dc3d33455be7ad782d934e0cd3869feb616a6be/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-529126",
	                "Source": "/var/lib/docker/volumes/multinode-529126/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-529126",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-529126",
	                "name.minikube.sigs.k8s.io": "multinode-529126",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a8bff3e93811663b50ab122e2b9e94a51d151c3938b3fabc9f06b6891a0b25e6",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32847"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32846"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32843"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32845"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32844"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a8bff3e93811",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-529126": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6e3734cea073",
	                        "multinode-529126"
	                    ],
	                    "NetworkID": "b62e509358e70adff0df5a24061870e7757fcc34ea0e1e31c191abf2d54674db",
	                    "EndpointID": "9d022cbe08ea39b1251055aa40995b4f9ed2e8056c5aa87dc9f41bc665fab2b8",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-529126 -n multinode-529126
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-529126 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-529126 logs -n 25: (1.18548355s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-928828                           | mount-start-2-928828 | jenkins | v1.31.2 | 25 Sep 23 10:51 UTC | 25 Sep 23 10:52 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-928828 ssh -- ls                    | mount-start-2-928828 | jenkins | v1.31.2 | 25 Sep 23 10:52 UTC | 25 Sep 23 10:52 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-909279                           | mount-start-1-909279 | jenkins | v1.31.2 | 25 Sep 23 10:52 UTC | 25 Sep 23 10:52 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-928828 ssh -- ls                    | mount-start-2-928828 | jenkins | v1.31.2 | 25 Sep 23 10:52 UTC | 25 Sep 23 10:52 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-928828                           | mount-start-2-928828 | jenkins | v1.31.2 | 25 Sep 23 10:52 UTC | 25 Sep 23 10:52 UTC |
	| start   | -p mount-start-2-928828                           | mount-start-2-928828 | jenkins | v1.31.2 | 25 Sep 23 10:52 UTC | 25 Sep 23 10:52 UTC |
	| ssh     | mount-start-2-928828 ssh -- ls                    | mount-start-2-928828 | jenkins | v1.31.2 | 25 Sep 23 10:52 UTC | 25 Sep 23 10:52 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-928828                           | mount-start-2-928828 | jenkins | v1.31.2 | 25 Sep 23 10:52 UTC | 25 Sep 23 10:52 UTC |
	| delete  | -p mount-start-1-909279                           | mount-start-1-909279 | jenkins | v1.31.2 | 25 Sep 23 10:52 UTC | 25 Sep 23 10:52 UTC |
	| start   | -p multinode-529126                               | multinode-529126     | jenkins | v1.31.2 | 25 Sep 23 10:52 UTC | 25 Sep 23 10:53 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-529126 -- apply -f                   | multinode-529126     | jenkins | v1.31.2 | 25 Sep 23 10:53 UTC | 25 Sep 23 10:53 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-529126 -- rollout                    | multinode-529126     | jenkins | v1.31.2 | 25 Sep 23 10:53 UTC | 25 Sep 23 10:53 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-529126 -- get pods -o                | multinode-529126     | jenkins | v1.31.2 | 25 Sep 23 10:53 UTC | 25 Sep 23 10:53 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-529126 -- get pods -o                | multinode-529126     | jenkins | v1.31.2 | 25 Sep 23 10:53 UTC | 25 Sep 23 10:53 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-529126 -- exec                       | multinode-529126     | jenkins | v1.31.2 | 25 Sep 23 10:53 UTC | 25 Sep 23 10:53 UTC |
	|         | busybox-5bc68d56bd-6xmht --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-529126 -- exec                       | multinode-529126     | jenkins | v1.31.2 | 25 Sep 23 10:53 UTC | 25 Sep 23 10:53 UTC |
	|         | busybox-5bc68d56bd-jnhqs --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-529126 -- exec                       | multinode-529126     | jenkins | v1.31.2 | 25 Sep 23 10:53 UTC | 25 Sep 23 10:53 UTC |
	|         | busybox-5bc68d56bd-6xmht --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-529126 -- exec                       | multinode-529126     | jenkins | v1.31.2 | 25 Sep 23 10:53 UTC | 25 Sep 23 10:53 UTC |
	|         | busybox-5bc68d56bd-jnhqs --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-529126 -- exec                       | multinode-529126     | jenkins | v1.31.2 | 25 Sep 23 10:53 UTC | 25 Sep 23 10:53 UTC |
	|         | busybox-5bc68d56bd-6xmht -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-529126 -- exec                       | multinode-529126     | jenkins | v1.31.2 | 25 Sep 23 10:53 UTC | 25 Sep 23 10:53 UTC |
	|         | busybox-5bc68d56bd-jnhqs -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-529126 -- get pods -o                | multinode-529126     | jenkins | v1.31.2 | 25 Sep 23 10:53 UTC | 25 Sep 23 10:53 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-529126 -- exec                       | multinode-529126     | jenkins | v1.31.2 | 25 Sep 23 10:53 UTC | 25 Sep 23 10:53 UTC |
	|         | busybox-5bc68d56bd-6xmht                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-529126 -- exec                       | multinode-529126     | jenkins | v1.31.2 | 25 Sep 23 10:53 UTC |                     |
	|         | busybox-5bc68d56bd-6xmht -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-529126 -- exec                       | multinode-529126     | jenkins | v1.31.2 | 25 Sep 23 10:53 UTC | 25 Sep 23 10:53 UTC |
	|         | busybox-5bc68d56bd-jnhqs                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-529126 -- exec                       | multinode-529126     | jenkins | v1.31.2 | 25 Sep 23 10:53 UTC |                     |
	|         | busybox-5bc68d56bd-jnhqs -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/25 10:52:13
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0925 10:52:13.692394   97187 out.go:296] Setting OutFile to fd 1 ...
	I0925 10:52:13.692679   97187 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 10:52:13.692691   97187 out.go:309] Setting ErrFile to fd 2...
	I0925 10:52:13.692696   97187 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 10:52:13.692923   97187 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17297-5744/.minikube/bin
	I0925 10:52:13.693530   97187 out.go:303] Setting JSON to false
	I0925 10:52:13.694710   97187 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2086,"bootTime":1695637048,"procs":564,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0925 10:52:13.694774   97187 start.go:138] virtualization: kvm guest
	I0925 10:52:13.697208   97187 out.go:177] * [multinode-529126] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0925 10:52:13.698625   97187 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 10:52:13.698636   97187 notify.go:220] Checking for updates...
	I0925 10:52:13.699905   97187 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 10:52:13.701237   97187 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17297-5744/kubeconfig
	I0925 10:52:13.702601   97187 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17297-5744/.minikube
	I0925 10:52:13.703881   97187 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0925 10:52:13.705107   97187 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 10:52:13.706422   97187 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 10:52:13.730569   97187 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0925 10:52:13.730669   97187 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0925 10:52:13.782069   97187 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2023-09-25 10:52:13.773435142 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1042-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0925 10:52:13.782207   97187 docker.go:294] overlay module found
	I0925 10:52:13.784082   97187 out.go:177] * Using the docker driver based on user configuration
	I0925 10:52:13.785369   97187 start.go:298] selected driver: docker
	I0925 10:52:13.785379   97187 start.go:902] validating driver "docker" against <nil>
	I0925 10:52:13.785389   97187 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 10:52:13.786087   97187 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0925 10:52:13.838026   97187 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2023-09-25 10:52:13.829028619 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1042-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0925 10:52:13.838207   97187 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0925 10:52:13.838469   97187 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 10:52:13.840303   97187 out.go:177] * Using Docker driver with root privileges
	I0925 10:52:13.841629   97187 cni.go:84] Creating CNI manager for ""
	I0925 10:52:13.841647   97187 cni.go:136] 0 nodes found, recommending kindnet
	I0925 10:52:13.841664   97187 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0925 10:52:13.841679   97187 start_flags.go:321] config:
	{Name:multinode-529126 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-529126 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 10:52:13.843232   97187 out.go:177] * Starting control plane node multinode-529126 in cluster multinode-529126
	I0925 10:52:13.844624   97187 cache.go:122] Beginning downloading kic base image for docker with crio
	I0925 10:52:13.845905   97187 out.go:177] * Pulling base image ...
	I0925 10:52:13.847004   97187 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I0925 10:52:13.847037   97187 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17297-5744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4
	I0925 10:52:13.847044   97187 cache.go:57] Caching tarball of preloaded images
	I0925 10:52:13.847088   97187 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I0925 10:52:13.847116   97187 preload.go:174] Found /home/jenkins/minikube-integration/17297-5744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0925 10:52:13.847129   97187 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I0925 10:52:13.847550   97187 profile.go:148] Saving config to /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/multinode-529126/config.json ...
	I0925 10:52:13.847577   97187 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/multinode-529126/config.json: {Name:mk808edf0ba92ef4e1a581509086f35ce3efe0dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 10:52:13.862415   97187 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon, skipping pull
	I0925 10:52:13.862444   97187 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 exists in daemon, skipping load
	I0925 10:52:13.862460   97187 cache.go:195] Successfully downloaded all kic artifacts
	I0925 10:52:13.862493   97187 start.go:365] acquiring machines lock for multinode-529126: {Name:mkca7c304f365a854cbf060c201653b89a513e19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 10:52:13.862598   97187 start.go:369] acquired machines lock for "multinode-529126" in 86.379µs
	I0925 10:52:13.862627   97187 start.go:93] Provisioning new machine with config: &{Name:multinode-529126 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-529126 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0925 10:52:13.862719   97187 start.go:125] createHost starting for "" (driver="docker")
	I0925 10:52:13.864676   97187 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0925 10:52:13.864900   97187 start.go:159] libmachine.API.Create for "multinode-529126" (driver="docker")
	I0925 10:52:13.864923   97187 client.go:168] LocalClient.Create starting
	I0925 10:52:13.864987   97187 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca.pem
	I0925 10:52:13.865025   97187 main.go:141] libmachine: Decoding PEM data...
	I0925 10:52:13.865049   97187 main.go:141] libmachine: Parsing certificate...
	I0925 10:52:13.865107   97187 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17297-5744/.minikube/certs/cert.pem
	I0925 10:52:13.865127   97187 main.go:141] libmachine: Decoding PEM data...
	I0925 10:52:13.865135   97187 main.go:141] libmachine: Parsing certificate...
	I0925 10:52:13.865433   97187 cli_runner.go:164] Run: docker network inspect multinode-529126 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0925 10:52:13.880914   97187 cli_runner.go:211] docker network inspect multinode-529126 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0925 10:52:13.880979   97187 network_create.go:281] running [docker network inspect multinode-529126] to gather additional debugging logs...
	I0925 10:52:13.880997   97187 cli_runner.go:164] Run: docker network inspect multinode-529126
	W0925 10:52:13.896241   97187 cli_runner.go:211] docker network inspect multinode-529126 returned with exit code 1
	I0925 10:52:13.896267   97187 network_create.go:284] error running [docker network inspect multinode-529126]: docker network inspect multinode-529126: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-529126 not found
	I0925 10:52:13.896282   97187 network_create.go:286] output of [docker network inspect multinode-529126]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-529126 not found
	
	** /stderr **
	I0925 10:52:13.896331   97187 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0925 10:52:13.911180   97187 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a0ebc8ea7836 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:53:7e:71:4b} reservation:<nil>}
	I0925 10:52:13.911637   97187 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00115ee90}
	I0925 10:52:13.911662   97187 network_create.go:123] attempt to create docker network multinode-529126 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0925 10:52:13.911699   97187 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-529126 multinode-529126
	I0925 10:52:13.960991   97187 network_create.go:107] docker network multinode-529126 192.168.58.0/24 created
	I0925 10:52:13.961017   97187 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-529126" container
	I0925 10:52:13.961079   97187 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0925 10:52:13.975547   97187 cli_runner.go:164] Run: docker volume create multinode-529126 --label name.minikube.sigs.k8s.io=multinode-529126 --label created_by.minikube.sigs.k8s.io=true
	I0925 10:52:13.990901   97187 oci.go:103] Successfully created a docker volume multinode-529126
	I0925 10:52:13.990963   97187 cli_runner.go:164] Run: docker run --rm --name multinode-529126-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-529126 --entrypoint /usr/bin/test -v multinode-529126:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -d /var/lib
	I0925 10:52:14.476577   97187 oci.go:107] Successfully prepared a docker volume multinode-529126
	I0925 10:52:14.476627   97187 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I0925 10:52:14.476670   97187 kic.go:190] Starting extracting preloaded images to volume ...
	I0925 10:52:14.476738   97187 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17297-5744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-529126:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir
	I0925 10:52:19.461198   97187 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17297-5744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-529126:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir: (4.984399188s)
	I0925 10:52:19.461230   97187 kic.go:199] duration metric: took 4.984558 seconds to extract preloaded images to volume
	W0925 10:52:19.461356   97187 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0925 10:52:19.461457   97187 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0925 10:52:19.510222   97187 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-529126 --name multinode-529126 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-529126 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-529126 --network multinode-529126 --ip 192.168.58.2 --volume multinode-529126:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3
	I0925 10:52:19.806949   97187 cli_runner.go:164] Run: docker container inspect multinode-529126 --format={{.State.Running}}
	I0925 10:52:19.823232   97187 cli_runner.go:164] Run: docker container inspect multinode-529126 --format={{.State.Status}}
	I0925 10:52:19.840608   97187 cli_runner.go:164] Run: docker exec multinode-529126 stat /var/lib/dpkg/alternatives/iptables
	I0925 10:52:19.880698   97187 oci.go:144] the created container "multinode-529126" has a running status.
	I0925 10:52:19.880726   97187 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17297-5744/.minikube/machines/multinode-529126/id_rsa...
	I0925 10:52:20.218963   97187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/machines/multinode-529126/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0925 10:52:20.219021   97187 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17297-5744/.minikube/machines/multinode-529126/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0925 10:52:20.238691   97187 cli_runner.go:164] Run: docker container inspect multinode-529126 --format={{.State.Status}}
	I0925 10:52:20.254461   97187 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0925 10:52:20.254484   97187 kic_runner.go:114] Args: [docker exec --privileged multinode-529126 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0925 10:52:20.309568   97187 cli_runner.go:164] Run: docker container inspect multinode-529126 --format={{.State.Status}}
	I0925 10:52:20.327580   97187 machine.go:88] provisioning docker machine ...
	I0925 10:52:20.327630   97187 ubuntu.go:169] provisioning hostname "multinode-529126"
	I0925 10:52:20.327686   97187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-529126
	I0925 10:52:20.342922   97187 main.go:141] libmachine: Using SSH client type: native
	I0925 10:52:20.343256   97187 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I0925 10:52:20.343271   97187 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-529126 && echo "multinode-529126" | sudo tee /etc/hostname
	I0925 10:52:20.482403   97187 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-529126
	
	I0925 10:52:20.482487   97187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-529126
	I0925 10:52:20.500845   97187 main.go:141] libmachine: Using SSH client type: native
	I0925 10:52:20.501165   97187 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I0925 10:52:20.501182   97187 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-529126' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-529126/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-529126' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0925 10:52:20.624510   97187 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0925 10:52:20.624551   97187 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17297-5744/.minikube CaCertPath:/home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17297-5744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17297-5744/.minikube}
	I0925 10:52:20.624585   97187 ubuntu.go:177] setting up certificates
	I0925 10:52:20.624595   97187 provision.go:83] configureAuth start
	I0925 10:52:20.624661   97187 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-529126
	I0925 10:52:20.640038   97187 provision.go:138] copyHostCerts
	I0925 10:52:20.640079   97187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17297-5744/.minikube/ca.pem
	I0925 10:52:20.640112   97187 exec_runner.go:144] found /home/jenkins/minikube-integration/17297-5744/.minikube/ca.pem, removing ...
	I0925 10:52:20.640124   97187 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17297-5744/.minikube/ca.pem
	I0925 10:52:20.640200   97187 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17297-5744/.minikube/ca.pem (1078 bytes)
	I0925 10:52:20.640298   97187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17297-5744/.minikube/cert.pem
	I0925 10:52:20.640319   97187 exec_runner.go:144] found /home/jenkins/minikube-integration/17297-5744/.minikube/cert.pem, removing ...
	I0925 10:52:20.640326   97187 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17297-5744/.minikube/cert.pem
	I0925 10:52:20.640364   97187 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17297-5744/.minikube/cert.pem (1123 bytes)
	I0925 10:52:20.640463   97187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17297-5744/.minikube/key.pem
	I0925 10:52:20.640497   97187 exec_runner.go:144] found /home/jenkins/minikube-integration/17297-5744/.minikube/key.pem, removing ...
	I0925 10:52:20.640507   97187 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17297-5744/.minikube/key.pem
	I0925 10:52:20.640542   97187 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17297-5744/.minikube/key.pem (1675 bytes)
	I0925 10:52:20.640611   97187 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17297-5744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca-key.pem org=jenkins.multinode-529126 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-529126]
	I0925 10:52:20.820846   97187 provision.go:172] copyRemoteCerts
	I0925 10:52:20.820912   97187 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0925 10:52:20.820945   97187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-529126
	I0925 10:52:20.837025   97187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/multinode-529126/id_rsa Username:docker}
	I0925 10:52:20.928418   97187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0925 10:52:20.928483   97187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0925 10:52:20.948821   97187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0925 10:52:20.948891   97187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0925 10:52:20.969047   97187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0925 10:52:20.969098   97187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0925 10:52:20.988925   97187 provision.go:86] duration metric: configureAuth took 364.314784ms
	I0925 10:52:20.988952   97187 ubuntu.go:193] setting minikube options for container-runtime
	I0925 10:52:20.989119   97187 config.go:182] Loaded profile config "multinode-529126": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0925 10:52:20.989207   97187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-529126
	I0925 10:52:21.005006   97187 main.go:141] libmachine: Using SSH client type: native
	I0925 10:52:21.005313   97187 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I0925 10:52:21.005334   97187 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0925 10:52:21.209999   97187 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0925 10:52:21.210030   97187 machine.go:91] provisioned docker machine in 882.425888ms
	I0925 10:52:21.210042   97187 client.go:171] LocalClient.Create took 7.345111312s
	I0925 10:52:21.210064   97187 start.go:167] duration metric: libmachine.API.Create for "multinode-529126" took 7.345162759s
	I0925 10:52:21.210076   97187 start.go:300] post-start starting for "multinode-529126" (driver="docker")
	I0925 10:52:21.210089   97187 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0925 10:52:21.210147   97187 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0925 10:52:21.210184   97187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-529126
	I0925 10:52:21.227150   97187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/multinode-529126/id_rsa Username:docker}
	I0925 10:52:21.316625   97187 ssh_runner.go:195] Run: cat /etc/os-release
	I0925 10:52:21.319271   97187 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I0925 10:52:21.319292   97187 command_runner.go:130] > NAME="Ubuntu"
	I0925 10:52:21.319301   97187 command_runner.go:130] > VERSION_ID="22.04"
	I0925 10:52:21.319310   97187 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I0925 10:52:21.319317   97187 command_runner.go:130] > VERSION_CODENAME=jammy
	I0925 10:52:21.319323   97187 command_runner.go:130] > ID=ubuntu
	I0925 10:52:21.319327   97187 command_runner.go:130] > ID_LIKE=debian
	I0925 10:52:21.319332   97187 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0925 10:52:21.319337   97187 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0925 10:52:21.319345   97187 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0925 10:52:21.319353   97187 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0925 10:52:21.319361   97187 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0925 10:52:21.319413   97187 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0925 10:52:21.319438   97187 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0925 10:52:21.319447   97187 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0925 10:52:21.319458   97187 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0925 10:52:21.319468   97187 filesync.go:126] Scanning /home/jenkins/minikube-integration/17297-5744/.minikube/addons for local assets ...
	I0925 10:52:21.319524   97187 filesync.go:126] Scanning /home/jenkins/minikube-integration/17297-5744/.minikube/files for local assets ...
	I0925 10:52:21.319622   97187 filesync.go:149] local asset: /home/jenkins/minikube-integration/17297-5744/.minikube/files/etc/ssl/certs/125162.pem -> 125162.pem in /etc/ssl/certs
	I0925 10:52:21.319638   97187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/files/etc/ssl/certs/125162.pem -> /etc/ssl/certs/125162.pem
	I0925 10:52:21.319744   97187 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0925 10:52:21.326864   97187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/files/etc/ssl/certs/125162.pem --> /etc/ssl/certs/125162.pem (1708 bytes)
	I0925 10:52:21.346759   97187 start.go:303] post-start completed in 136.666068ms
	I0925 10:52:21.347087   97187 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-529126
	I0925 10:52:21.362477   97187 profile.go:148] Saving config to /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/multinode-529126/config.json ...
	I0925 10:52:21.362700   97187 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0925 10:52:21.362750   97187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-529126
	I0925 10:52:21.378839   97187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/multinode-529126/id_rsa Username:docker}
	I0925 10:52:21.464749   97187 command_runner.go:130] > 19%!
	(MISSING)I0925 10:52:21.464906   97187 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0925 10:52:21.468511   97187 command_runner.go:130] > 237G
	I0925 10:52:21.468681   97187 start.go:128] duration metric: createHost completed in 7.605950197s
	I0925 10:52:21.468703   97187 start.go:83] releasing machines lock for "multinode-529126", held for 7.606088885s
	I0925 10:52:21.468755   97187 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-529126
	I0925 10:52:21.484114   97187 ssh_runner.go:195] Run: cat /version.json
	I0925 10:52:21.484153   97187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-529126
	I0925 10:52:21.484207   97187 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0925 10:52:21.484265   97187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-529126
	I0925 10:52:21.500475   97187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/multinode-529126/id_rsa Username:docker}
	I0925 10:52:21.503046   97187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/multinode-529126/id_rsa Username:docker}
	I0925 10:52:21.672952   97187 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0925 10:52:21.673025   97187 command_runner.go:130] > {"iso_version": "v1.31.0-1694625400-17243", "kicbase_version": "v0.0.40-1694798187-17250", "minikube_version": "v1.31.2", "commit": "c590c2ca0a7db48c4b84c041c2699711a39ab56a"}
	I0925 10:52:21.673139   97187 ssh_runner.go:195] Run: systemctl --version
	I0925 10:52:21.677041   97187 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.10)
	I0925 10:52:21.677078   97187 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0925 10:52:21.677130   97187 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0925 10:52:21.813184   97187 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0925 10:52:21.816985   97187 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0925 10:52:21.817010   97187 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0925 10:52:21.817019   97187 command_runner.go:130] > Device: 37h/55d	Inode: 540251      Links: 1
	I0925 10:52:21.817029   97187 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0925 10:52:21.817038   97187 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0925 10:52:21.817046   97187 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0925 10:52:21.817051   97187 command_runner.go:130] > Change: 2023-09-25 10:33:46.731088186 +0000
	I0925 10:52:21.817058   97187 command_runner.go:130] >  Birth: 2023-09-25 10:33:46.731088186 +0000
	I0925 10:52:21.817237   97187 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0925 10:52:21.833975   97187 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0925 10:52:21.834053   97187 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0925 10:52:21.858262   97187 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0925 10:52:21.858298   97187 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0925 10:52:21.858307   97187 start.go:469] detecting cgroup driver to use...
	I0925 10:52:21.858340   97187 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0925 10:52:21.858378   97187 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0925 10:52:21.870669   97187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0925 10:52:21.879677   97187 docker.go:197] disabling cri-docker service (if available) ...
	I0925 10:52:21.879717   97187 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0925 10:52:21.890996   97187 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0925 10:52:21.902258   97187 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0925 10:52:21.978237   97187 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0925 10:52:22.056471   97187 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0925 10:52:22.056502   97187 docker.go:213] disabling docker service ...
	I0925 10:52:22.056539   97187 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0925 10:52:22.072916   97187 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0925 10:52:22.082772   97187 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0925 10:52:22.155879   97187 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0925 10:52:22.155945   97187 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0925 10:52:22.236204   97187 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0925 10:52:22.236273   97187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0925 10:52:22.245841   97187 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0925 10:52:22.258526   97187 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0925 10:52:22.259260   97187 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0925 10:52:22.259309   97187 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0925 10:52:22.267375   97187 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0925 10:52:22.267426   97187 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0925 10:52:22.275388   97187 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0925 10:52:22.283177   97187 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0925 10:52:22.290887   97187 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
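The sed invocations above pin the pause image and switch CRI-O's cgroup handling to cgroupfs with conmon in the pod cgroup. A sketch that applies the same edits to the drop-in file in Go, assuming the file exists (the regexes mirror the sed patterns in the log; order of operations is rearranged but the net effect is the same):

    // crio_conf.go: sketch of the crio.conf.d edits above.
    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        s := string(data)
        // Pin the pause image.
        s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.9"`)
        // Drop any existing conmon_cgroup line (leaves a blank line behind,
        // which TOML tolerates).
        s = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*$`).ReplaceAllString(s, "")
        // Set the cgroup manager and re-add conmon_cgroup right after it.
        s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(s, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
        if err := os.WriteFile(path, []byte(s), 0o644); err != nil {
            panic(err)
        }
    }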
	I0925 10:52:22.298020   97187 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0925 10:52:22.303979   97187 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0925 10:52:22.304557   97187 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0925 10:52:22.311264   97187 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 10:52:22.387023   97187 ssh_runner.go:195] Run: sudo systemctl restart crio
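With the config in place, bridge netfilter is checked, IPv4 forwarding is enabled, and CRI-O is restarted. A local sketch of those four commands using os/exec (the log runs them over SSH):

    // restart_crio.go: sketch of the forwarding check and crio restart above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(name string, args ...string) {
        out, err := exec.Command(name, args...).CombinedOutput()
        fmt.Printf("$ %s %v\n%s", name, args, out)
        if err != nil {
            panic(err)
        }
    }

    func main() {
        run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables")
        run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
        run("sudo", "systemctl", "daemon-reload")
        run("sudo", "systemctl", "restart", "crio")
    }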
	I0925 10:52:22.483366   97187 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0925 10:52:22.483433   97187 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0925 10:52:22.486465   97187 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0925 10:52:22.486486   97187 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0925 10:52:22.486496   97187 command_runner.go:130] > Device: 40h/64d	Inode: 190         Links: 1
	I0925 10:52:22.486506   97187 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0925 10:52:22.486514   97187 command_runner.go:130] > Access: 2023-09-25 10:52:22.470117429 +0000
	I0925 10:52:22.486524   97187 command_runner.go:130] > Modify: 2023-09-25 10:52:22.470117429 +0000
	I0925 10:52:22.486545   97187 command_runner.go:130] > Change: 2023-09-25 10:52:22.470117429 +0000
	I0925 10:52:22.486552   97187 command_runner.go:130] >  Birth: -
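The 60s wait announced above amounts to polling until /var/run/crio/crio.sock exists as a socket. A sketch of such a poll loop (the 500ms interval is an assumption; the log only shows a single successful stat):

    // wait_socket.go: sketch of waiting for the CRI-O socket to appear.
    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/crio/crio.sock"
        deadline := time.Now().Add(60 * time.Second)
        for {
            if fi, err := os.Stat(sock); err == nil && fi.Mode()&os.ModeSocket != 0 {
                fmt.Println("socket ready:", sock)
                return
            }
            if time.Now().After(deadline) {
                panic("timed out waiting for " + sock)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }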
	I0925 10:52:22.486581   97187 start.go:537] Will wait 60s for crictl version
	I0925 10:52:22.486620   97187 ssh_runner.go:195] Run: which crictl
	I0925 10:52:22.489395   97187 command_runner.go:130] > /usr/bin/crictl
	I0925 10:52:22.489457   97187 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0925 10:52:22.518527   97187 command_runner.go:130] > Version:  0.1.0
	I0925 10:52:22.518548   97187 command_runner.go:130] > RuntimeName:  cri-o
	I0925 10:52:22.518556   97187 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0925 10:52:22.518568   97187 command_runner.go:130] > RuntimeApiVersion:  v1
	I0925 10:52:22.520239   97187 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0925 10:52:22.520312   97187 ssh_runner.go:195] Run: crio --version
	I0925 10:52:22.550972   97187 command_runner.go:130] > crio version 1.24.6
	I0925 10:52:22.550996   97187 command_runner.go:130] > Version:          1.24.6
	I0925 10:52:22.551007   97187 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0925 10:52:22.551015   97187 command_runner.go:130] > GitTreeState:     clean
	I0925 10:52:22.551025   97187 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0925 10:52:22.551033   97187 command_runner.go:130] > GoVersion:        go1.18.2
	I0925 10:52:22.551038   97187 command_runner.go:130] > Compiler:         gc
	I0925 10:52:22.551044   97187 command_runner.go:130] > Platform:         linux/amd64
	I0925 10:52:22.551052   97187 command_runner.go:130] > Linkmode:         dynamic
	I0925 10:52:22.551071   97187 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0925 10:52:22.551082   97187 command_runner.go:130] > SeccompEnabled:   true
	I0925 10:52:22.551092   97187 command_runner.go:130] > AppArmorEnabled:  false
	I0925 10:52:22.552501   97187 ssh_runner.go:195] Run: crio --version
	I0925 10:52:22.584582   97187 command_runner.go:130] > crio version 1.24.6
	I0925 10:52:22.584601   97187 command_runner.go:130] > Version:          1.24.6
	I0925 10:52:22.584608   97187 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0925 10:52:22.584612   97187 command_runner.go:130] > GitTreeState:     clean
	I0925 10:52:22.584618   97187 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0925 10:52:22.584624   97187 command_runner.go:130] > GoVersion:        go1.18.2
	I0925 10:52:22.584628   97187 command_runner.go:130] > Compiler:         gc
	I0925 10:52:22.584650   97187 command_runner.go:130] > Platform:         linux/amd64
	I0925 10:52:22.584659   97187 command_runner.go:130] > Linkmode:         dynamic
	I0925 10:52:22.584672   97187 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0925 10:52:22.584679   97187 command_runner.go:130] > SeccompEnabled:   true
	I0925 10:52:22.584685   97187 command_runner.go:130] > AppArmorEnabled:  false
	I0925 10:52:22.586731   97187 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.6 ...
	I0925 10:52:22.588105   97187 cli_runner.go:164] Run: docker network inspect multinode-529126 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0925 10:52:22.603361   97187 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0925 10:52:22.606485   97187 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0925 10:52:22.616447   97187 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I0925 10:52:22.616496   97187 ssh_runner.go:195] Run: sudo crictl images --output json
	I0925 10:52:22.659895   97187 command_runner.go:130] > {
	I0925 10:52:22.659921   97187 command_runner.go:130] >   "images": [
	I0925 10:52:22.659930   97187 command_runner.go:130] >     {
	I0925 10:52:22.659942   97187 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0925 10:52:22.659951   97187 command_runner.go:130] >       "repoTags": [
	I0925 10:52:22.659961   97187 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0925 10:52:22.659970   97187 command_runner.go:130] >       ],
	I0925 10:52:22.659980   97187 command_runner.go:130] >       "repoDigests": [
	I0925 10:52:22.659991   97187 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0925 10:52:22.660001   97187 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0925 10:52:22.660008   97187 command_runner.go:130] >       ],
	I0925 10:52:22.660013   97187 command_runner.go:130] >       "size": "65258016",
	I0925 10:52:22.660019   97187 command_runner.go:130] >       "uid": null,
	I0925 10:52:22.660024   97187 command_runner.go:130] >       "username": "",
	I0925 10:52:22.660035   97187 command_runner.go:130] >       "spec": null,
	I0925 10:52:22.660042   97187 command_runner.go:130] >       "pinned": false
	I0925 10:52:22.660046   97187 command_runner.go:130] >     },
	I0925 10:52:22.660050   97187 command_runner.go:130] >     {
	I0925 10:52:22.660058   97187 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0925 10:52:22.660065   97187 command_runner.go:130] >       "repoTags": [
	I0925 10:52:22.660071   97187 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0925 10:52:22.660077   97187 command_runner.go:130] >       ],
	I0925 10:52:22.660082   97187 command_runner.go:130] >       "repoDigests": [
	I0925 10:52:22.660092   97187 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0925 10:52:22.660102   97187 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0925 10:52:22.660108   97187 command_runner.go:130] >       ],
	I0925 10:52:22.660116   97187 command_runner.go:130] >       "size": "31470524",
	I0925 10:52:22.660122   97187 command_runner.go:130] >       "uid": null,
	I0925 10:52:22.660127   97187 command_runner.go:130] >       "username": "",
	I0925 10:52:22.660133   97187 command_runner.go:130] >       "spec": null,
	I0925 10:52:22.660138   97187 command_runner.go:130] >       "pinned": false
	I0925 10:52:22.660141   97187 command_runner.go:130] >     },
	I0925 10:52:22.660148   97187 command_runner.go:130] >     {
	I0925 10:52:22.660154   97187 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0925 10:52:22.660160   97187 command_runner.go:130] >       "repoTags": [
	I0925 10:52:22.660166   97187 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0925 10:52:22.660172   97187 command_runner.go:130] >       ],
	I0925 10:52:22.660177   97187 command_runner.go:130] >       "repoDigests": [
	I0925 10:52:22.660186   97187 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0925 10:52:22.660196   97187 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0925 10:52:22.660202   97187 command_runner.go:130] >       ],
	I0925 10:52:22.660206   97187 command_runner.go:130] >       "size": "53621675",
	I0925 10:52:22.660212   97187 command_runner.go:130] >       "uid": null,
	I0925 10:52:22.660217   97187 command_runner.go:130] >       "username": "",
	I0925 10:52:22.660223   97187 command_runner.go:130] >       "spec": null,
	I0925 10:52:22.660227   97187 command_runner.go:130] >       "pinned": false
	I0925 10:52:22.660233   97187 command_runner.go:130] >     },
	I0925 10:52:22.660237   97187 command_runner.go:130] >     {
	I0925 10:52:22.660246   97187 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0925 10:52:22.660252   97187 command_runner.go:130] >       "repoTags": [
	I0925 10:52:22.660258   97187 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0925 10:52:22.660264   97187 command_runner.go:130] >       ],
	I0925 10:52:22.660268   97187 command_runner.go:130] >       "repoDigests": [
	I0925 10:52:22.660277   97187 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0925 10:52:22.660287   97187 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0925 10:52:22.660297   97187 command_runner.go:130] >       ],
	I0925 10:52:22.660301   97187 command_runner.go:130] >       "size": "295456551",
	I0925 10:52:22.660307   97187 command_runner.go:130] >       "uid": {
	I0925 10:52:22.660312   97187 command_runner.go:130] >         "value": "0"
	I0925 10:52:22.660318   97187 command_runner.go:130] >       },
	I0925 10:52:22.660322   97187 command_runner.go:130] >       "username": "",
	I0925 10:52:22.660331   97187 command_runner.go:130] >       "spec": null,
	I0925 10:52:22.660338   97187 command_runner.go:130] >       "pinned": false
	I0925 10:52:22.660341   97187 command_runner.go:130] >     },
	I0925 10:52:22.660348   97187 command_runner.go:130] >     {
	I0925 10:52:22.660354   97187 command_runner.go:130] >       "id": "cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce",
	I0925 10:52:22.660360   97187 command_runner.go:130] >       "repoTags": [
	I0925 10:52:22.660366   97187 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.2"
	I0925 10:52:22.660372   97187 command_runner.go:130] >       ],
	I0925 10:52:22.660376   97187 command_runner.go:130] >       "repoDigests": [
	I0925 10:52:22.660385   97187 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631",
	I0925 10:52:22.660395   97187 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdcef99dd3237522049a0b32cad736c"
	I0925 10:52:22.660407   97187 command_runner.go:130] >       ],
	I0925 10:52:22.660415   97187 command_runner.go:130] >       "size": "127149008",
	I0925 10:52:22.660419   97187 command_runner.go:130] >       "uid": {
	I0925 10:52:22.660426   97187 command_runner.go:130] >         "value": "0"
	I0925 10:52:22.660430   97187 command_runner.go:130] >       },
	I0925 10:52:22.660437   97187 command_runner.go:130] >       "username": "",
	I0925 10:52:22.660441   97187 command_runner.go:130] >       "spec": null,
	I0925 10:52:22.660448   97187 command_runner.go:130] >       "pinned": false
	I0925 10:52:22.660451   97187 command_runner.go:130] >     },
	I0925 10:52:22.660458   97187 command_runner.go:130] >     {
	I0925 10:52:22.660464   97187 command_runner.go:130] >       "id": "55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57",
	I0925 10:52:22.660471   97187 command_runner.go:130] >       "repoTags": [
	I0925 10:52:22.660476   97187 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.2"
	I0925 10:52:22.660480   97187 command_runner.go:130] >       ],
	I0925 10:52:22.660487   97187 command_runner.go:130] >       "repoDigests": [
	I0925 10:52:22.660494   97187 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4",
	I0925 10:52:22.660504   97187 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:757a9c9d2f2329799490f9cec6c8ea12dfe4b6225051f6436f39d22a1def682e"
	I0925 10:52:22.660510   97187 command_runner.go:130] >       ],
	I0925 10:52:22.660515   97187 command_runner.go:130] >       "size": "123171638",
	I0925 10:52:22.660521   97187 command_runner.go:130] >       "uid": {
	I0925 10:52:22.660527   97187 command_runner.go:130] >         "value": "0"
	I0925 10:52:22.660541   97187 command_runner.go:130] >       },
	I0925 10:52:22.660548   97187 command_runner.go:130] >       "username": "",
	I0925 10:52:22.660557   97187 command_runner.go:130] >       "spec": null,
	I0925 10:52:22.660566   97187 command_runner.go:130] >       "pinned": false
	I0925 10:52:22.660575   97187 command_runner.go:130] >     },
	I0925 10:52:22.660583   97187 command_runner.go:130] >     {
	I0925 10:52:22.660591   97187 command_runner.go:130] >       "id": "c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0",
	I0925 10:52:22.660598   97187 command_runner.go:130] >       "repoTags": [
	I0925 10:52:22.660603   97187 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.2"
	I0925 10:52:22.660612   97187 command_runner.go:130] >       ],
	I0925 10:52:22.660618   97187 command_runner.go:130] >       "repoDigests": [
	I0925 10:52:22.660626   97187 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded",
	I0925 10:52:22.660651   97187 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf"
	I0925 10:52:22.660661   97187 command_runner.go:130] >       ],
	I0925 10:52:22.660669   97187 command_runner.go:130] >       "size": "74687895",
	I0925 10:52:22.660679   97187 command_runner.go:130] >       "uid": null,
	I0925 10:52:22.660685   97187 command_runner.go:130] >       "username": "",
	I0925 10:52:22.660690   97187 command_runner.go:130] >       "spec": null,
	I0925 10:52:22.660696   97187 command_runner.go:130] >       "pinned": false
	I0925 10:52:22.660700   97187 command_runner.go:130] >     },
	I0925 10:52:22.660706   97187 command_runner.go:130] >     {
	I0925 10:52:22.660712   97187 command_runner.go:130] >       "id": "7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8",
	I0925 10:52:22.660719   97187 command_runner.go:130] >       "repoTags": [
	I0925 10:52:22.660724   97187 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.2"
	I0925 10:52:22.660730   97187 command_runner.go:130] >       ],
	I0925 10:52:22.660734   97187 command_runner.go:130] >       "repoDigests": [
	I0925 10:52:22.660785   97187 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab",
	I0925 10:52:22.660803   97187 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8fc5b9b97128515266d5435273682ba36115d9ca1b68a5749e6f9a23927ef543"
	I0925 10:52:22.660810   97187 command_runner.go:130] >       ],
	I0925 10:52:22.660821   97187 command_runner.go:130] >       "size": "61485878",
	I0925 10:52:22.660828   97187 command_runner.go:130] >       "uid": {
	I0925 10:52:22.660838   97187 command_runner.go:130] >         "value": "0"
	I0925 10:52:22.660847   97187 command_runner.go:130] >       },
	I0925 10:52:22.660857   97187 command_runner.go:130] >       "username": "",
	I0925 10:52:22.660864   97187 command_runner.go:130] >       "spec": null,
	I0925 10:52:22.660868   97187 command_runner.go:130] >       "pinned": false
	I0925 10:52:22.660874   97187 command_runner.go:130] >     },
	I0925 10:52:22.660878   97187 command_runner.go:130] >     {
	I0925 10:52:22.660886   97187 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0925 10:52:22.660893   97187 command_runner.go:130] >       "repoTags": [
	I0925 10:52:22.660898   97187 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0925 10:52:22.660904   97187 command_runner.go:130] >       ],
	I0925 10:52:22.660908   97187 command_runner.go:130] >       "repoDigests": [
	I0925 10:52:22.660918   97187 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0925 10:52:22.660927   97187 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0925 10:52:22.660933   97187 command_runner.go:130] >       ],
	I0925 10:52:22.660937   97187 command_runner.go:130] >       "size": "750414",
	I0925 10:52:22.660943   97187 command_runner.go:130] >       "uid": {
	I0925 10:52:22.660948   97187 command_runner.go:130] >         "value": "65535"
	I0925 10:52:22.660954   97187 command_runner.go:130] >       },
	I0925 10:52:22.660958   97187 command_runner.go:130] >       "username": "",
	I0925 10:52:22.660967   97187 command_runner.go:130] >       "spec": null,
	I0925 10:52:22.660973   97187 command_runner.go:130] >       "pinned": false
	I0925 10:52:22.660977   97187 command_runner.go:130] >     }
	I0925 10:52:22.660980   97187 command_runner.go:130] >   ]
	I0925 10:52:22.660984   97187 command_runner.go:130] > }
	I0925 10:52:22.662082   97187 crio.go:496] all images are preloaded for cri-o runtime.
	I0925 10:52:22.662100   97187 crio.go:415] Images already preloaded, skipping extraction
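The image list is emitted as JSON with the fields shown above (id, repoTags, repoDigests, size, pinned, plus uid/username/spec). A sketch of decoding that payload in Go, with struct tags matching the keys in the log (uid and spec omitted for brevity):

    // images_json.go: sketch of parsing `crictl images --output json`.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type image struct {
        ID          string   `json:"id"`
        RepoTags    []string `json:"repoTags"`
        RepoDigests []string `json:"repoDigests"`
        Size        string   `json:"size"` // sizes are quoted strings in the payload
        Pinned      bool     `json:"pinned"`
    }

    func main() {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            panic(err)
        }
        var payload struct {
            Images []image `json:"images"`
        }
        if err := json.Unmarshal(out, &payload); err != nil {
            panic(err)
        }
        for _, img := range payload.Images {
            fmt.Println(img.RepoTags, img.Size)
        }
    }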
	I0925 10:52:22.662153   97187 ssh_runner.go:195] Run: sudo crictl images --output json
	(the second "sudo crictl images --output json" run returned the same nine-image list as the run above; duplicate output omitted)
	I0925 10:52:22.693293   97187 crio.go:496] all images are preloaded for cri-o runtime.
	I0925 10:52:22.693302   97187 cache_images.go:84] Images are preloaded, skipping loading
	I0925 10:52:22.693355   97187 ssh_runner.go:195] Run: crio config
	I0925 10:52:22.728446   97187 command_runner.go:130] ! time="2023-09-25 10:52:22.728004932Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0925 10:52:22.728476   97187 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0925 10:52:22.733549   97187 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0925 10:52:22.733578   97187 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0925 10:52:22.733590   97187 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0925 10:52:22.733596   97187 command_runner.go:130] > #
	I0925 10:52:22.733607   97187 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0925 10:52:22.733621   97187 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0925 10:52:22.733635   97187 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0925 10:52:22.733653   97187 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0925 10:52:22.733663   97187 command_runner.go:130] > # reload'.
	I0925 10:52:22.733677   97187 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0925 10:52:22.733690   97187 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0925 10:52:22.733702   97187 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0925 10:52:22.733711   97187 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0925 10:52:22.733715   97187 command_runner.go:130] > [crio]
	I0925 10:52:22.733723   97187 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0925 10:52:22.733730   97187 command_runner.go:130] > # containers images, in this directory.
	I0925 10:52:22.733738   97187 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0925 10:52:22.733747   97187 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0925 10:52:22.733755   97187 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0925 10:52:22.733762   97187 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0925 10:52:22.733770   97187 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0925 10:52:22.733778   97187 command_runner.go:130] > # storage_driver = "vfs"
	I0925 10:52:22.733783   97187 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0925 10:52:22.733791   97187 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0925 10:52:22.733798   97187 command_runner.go:130] > # storage_option = [
	I0925 10:52:22.733802   97187 command_runner.go:130] > # ]
	I0925 10:52:22.733811   97187 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0925 10:52:22.733819   97187 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0925 10:52:22.733827   97187 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0925 10:52:22.733833   97187 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0925 10:52:22.733842   97187 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0925 10:52:22.733848   97187 command_runner.go:130] > # always happen on a node reboot
	I0925 10:52:22.733853   97187 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0925 10:52:22.733861   97187 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0925 10:52:22.733869   97187 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0925 10:52:22.733879   97187 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0925 10:52:22.733886   97187 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0925 10:52:22.733894   97187 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0925 10:52:22.733904   97187 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0925 10:52:22.733910   97187 command_runner.go:130] > # internal_wipe = true
	I0925 10:52:22.733916   97187 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0925 10:52:22.733924   97187 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0925 10:52:22.733932   97187 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0925 10:52:22.733940   97187 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0925 10:52:22.733949   97187 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0925 10:52:22.733955   97187 command_runner.go:130] > [crio.api]
	I0925 10:52:22.733961   97187 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0925 10:52:22.733968   97187 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0925 10:52:22.733973   97187 command_runner.go:130] > # IP address on which the stream server will listen.
	I0925 10:52:22.733979   97187 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0925 10:52:22.733986   97187 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0925 10:52:22.733994   97187 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0925 10:52:22.734001   97187 command_runner.go:130] > # stream_port = "0"
	I0925 10:52:22.734006   97187 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0925 10:52:22.734013   97187 command_runner.go:130] > # stream_enable_tls = false
	I0925 10:52:22.734019   97187 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0925 10:52:22.734026   97187 command_runner.go:130] > # stream_idle_timeout = ""
	I0925 10:52:22.734036   97187 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0925 10:52:22.734045   97187 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0925 10:52:22.734051   97187 command_runner.go:130] > # minutes.
	I0925 10:52:22.734055   97187 command_runner.go:130] > # stream_tls_cert = ""
	I0925 10:52:22.734064   97187 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0925 10:52:22.734072   97187 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0925 10:52:22.734077   97187 command_runner.go:130] > # stream_tls_key = ""
	I0925 10:52:22.734084   97187 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0925 10:52:22.734091   97187 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0925 10:52:22.734098   97187 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0925 10:52:22.734106   97187 command_runner.go:130] > # stream_tls_ca = ""
	I0925 10:52:22.734113   97187 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0925 10:52:22.734120   97187 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0925 10:52:22.734127   97187 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0925 10:52:22.734133   97187 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0925 10:52:22.734149   97187 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0925 10:52:22.734156   97187 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0925 10:52:22.734161   97187 command_runner.go:130] > [crio.runtime]
	I0925 10:52:22.734167   97187 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0925 10:52:22.734175   97187 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0925 10:52:22.734182   97187 command_runner.go:130] > # "nofile=1024:2048"
	I0925 10:52:22.734188   97187 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0925 10:52:22.734196   97187 command_runner.go:130] > # default_ulimits = [
	I0925 10:52:22.734202   97187 command_runner.go:130] > # ]
	I0925 10:52:22.734208   97187 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0925 10:52:22.734214   97187 command_runner.go:130] > # no_pivot = false
	I0925 10:52:22.734220   97187 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0925 10:52:22.734228   97187 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0925 10:52:22.734235   97187 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0925 10:52:22.734243   97187 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0925 10:52:22.734250   97187 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0925 10:52:22.734257   97187 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0925 10:52:22.734264   97187 command_runner.go:130] > # conmon = ""
	I0925 10:52:22.734272   97187 command_runner.go:130] > # Cgroup setting for conmon
	I0925 10:52:22.734279   97187 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0925 10:52:22.734285   97187 command_runner.go:130] > conmon_cgroup = "pod"
	I0925 10:52:22.734292   97187 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0925 10:52:22.734299   97187 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0925 10:52:22.734309   97187 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0925 10:52:22.734315   97187 command_runner.go:130] > # conmon_env = [
	I0925 10:52:22.734319   97187 command_runner.go:130] > # ]
	I0925 10:52:22.734327   97187 command_runner.go:130] > # Additional environment variables to set for all the
	I0925 10:52:22.734336   97187 command_runner.go:130] > # containers. These are overridden if set in the
	I0925 10:52:22.734342   97187 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0925 10:52:22.734349   97187 command_runner.go:130] > # default_env = [
	I0925 10:52:22.734352   97187 command_runner.go:130] > # ]
	I0925 10:52:22.734360   97187 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0925 10:52:22.734367   97187 command_runner.go:130] > # selinux = false
	I0925 10:52:22.734374   97187 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0925 10:52:22.734383   97187 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0925 10:52:22.734391   97187 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0925 10:52:22.734397   97187 command_runner.go:130] > # seccomp_profile = ""
	I0925 10:52:22.734403   97187 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0925 10:52:22.734410   97187 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0925 10:52:22.734421   97187 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0925 10:52:22.734427   97187 command_runner.go:130] > # which might increase security.
	I0925 10:52:22.734432   97187 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0925 10:52:22.734440   97187 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0925 10:52:22.734452   97187 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0925 10:52:22.734461   97187 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0925 10:52:22.734470   97187 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0925 10:52:22.734475   97187 command_runner.go:130] > # This option supports live configuration reload.
	I0925 10:52:22.734482   97187 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0925 10:52:22.734488   97187 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0925 10:52:22.734494   97187 command_runner.go:130] > # the cgroup blockio controller.
	I0925 10:52:22.734499   97187 command_runner.go:130] > # blockio_config_file = ""
	I0925 10:52:22.734507   97187 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0925 10:52:22.734515   97187 command_runner.go:130] > # irqbalance daemon.
	I0925 10:52:22.734520   97187 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0925 10:52:22.734529   97187 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0925 10:52:22.734538   97187 command_runner.go:130] > # This option supports live configuration reload.
	I0925 10:52:22.734544   97187 command_runner.go:130] > # rdt_config_file = ""
	I0925 10:52:22.734550   97187 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0925 10:52:22.734557   97187 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0925 10:52:22.734563   97187 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0925 10:52:22.734570   97187 command_runner.go:130] > # separate_pull_cgroup = ""
	I0925 10:52:22.734577   97187 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0925 10:52:22.734586   97187 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0925 10:52:22.734592   97187 command_runner.go:130] > # will be added.
	I0925 10:52:22.734596   97187 command_runner.go:130] > # default_capabilities = [
	I0925 10:52:22.734602   97187 command_runner.go:130] > # 	"CHOWN",
	I0925 10:52:22.734607   97187 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0925 10:52:22.734611   97187 command_runner.go:130] > # 	"FSETID",
	I0925 10:52:22.734618   97187 command_runner.go:130] > # 	"FOWNER",
	I0925 10:52:22.734622   97187 command_runner.go:130] > # 	"SETGID",
	I0925 10:52:22.734628   97187 command_runner.go:130] > # 	"SETUID",
	I0925 10:52:22.734632   97187 command_runner.go:130] > # 	"SETPCAP",
	I0925 10:52:22.734638   97187 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0925 10:52:22.734642   97187 command_runner.go:130] > # 	"KILL",
	I0925 10:52:22.734648   97187 command_runner.go:130] > # ]
	I0925 10:52:22.734656   97187 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0925 10:52:22.734665   97187 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0925 10:52:22.734672   97187 command_runner.go:130] > # add_inheritable_capabilities = true
	I0925 10:52:22.734678   97187 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0925 10:52:22.734687   97187 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0925 10:52:22.734694   97187 command_runner.go:130] > # default_sysctls = [
	I0925 10:52:22.734698   97187 command_runner.go:130] > # ]
	I0925 10:52:22.734705   97187 command_runner.go:130] > # List of devices on the host that a
	I0925 10:52:22.734711   97187 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0925 10:52:22.734717   97187 command_runner.go:130] > # allowed_devices = [
	I0925 10:52:22.734721   97187 command_runner.go:130] > # 	"/dev/fuse",
	I0925 10:52:22.734727   97187 command_runner.go:130] > # ]
	I0925 10:52:22.734733   97187 command_runner.go:130] > # List of additional devices, specified as
	I0925 10:52:22.734752   97187 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0925 10:52:22.734760   97187 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0925 10:52:22.734769   97187 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0925 10:52:22.734776   97187 command_runner.go:130] > # additional_devices = [
	I0925 10:52:22.734779   97187 command_runner.go:130] > # ]
	I0925 10:52:22.734787   97187 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0925 10:52:22.734794   97187 command_runner.go:130] > # cdi_spec_dirs = [
	I0925 10:52:22.734798   97187 command_runner.go:130] > # 	"/etc/cdi",
	I0925 10:52:22.734804   97187 command_runner.go:130] > # 	"/var/run/cdi",
	I0925 10:52:22.734808   97187 command_runner.go:130] > # ]
	I0925 10:52:22.734817   97187 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0925 10:52:22.734825   97187 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0925 10:52:22.734832   97187 command_runner.go:130] > # Defaults to false.
	I0925 10:52:22.734837   97187 command_runner.go:130] > # device_ownership_from_security_context = false
	I0925 10:52:22.734845   97187 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0925 10:52:22.734853   97187 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0925 10:52:22.734860   97187 command_runner.go:130] > # hooks_dir = [
	I0925 10:52:22.734865   97187 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0925 10:52:22.734868   97187 command_runner.go:130] > # ]
	I0925 10:52:22.734877   97187 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0925 10:52:22.734886   97187 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0925 10:52:22.734894   97187 command_runner.go:130] > # its default mounts from the following two files:
	I0925 10:52:22.734897   97187 command_runner.go:130] > #
	I0925 10:52:22.734905   97187 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0925 10:52:22.734914   97187 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0925 10:52:22.734921   97187 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0925 10:52:22.734927   97187 command_runner.go:130] > #
	I0925 10:52:22.734934   97187 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0925 10:52:22.734943   97187 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0925 10:52:22.734951   97187 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0925 10:52:22.734959   97187 command_runner.go:130] > #      only add mounts it finds in this file.
	I0925 10:52:22.734962   97187 command_runner.go:130] > #
	I0925 10:52:22.734969   97187 command_runner.go:130] > # default_mounts_file = ""
	I0925 10:52:22.734974   97187 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0925 10:52:22.734983   97187 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0925 10:52:22.734989   97187 command_runner.go:130] > # pids_limit = 0
	I0925 10:52:22.734995   97187 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0925 10:52:22.735003   97187 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0925 10:52:22.735012   97187 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0925 10:52:22.735023   97187 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0925 10:52:22.735033   97187 command_runner.go:130] > # log_size_max = -1
	I0925 10:52:22.735043   97187 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0925 10:52:22.735049   97187 command_runner.go:130] > # log_to_journald = false
	I0925 10:52:22.735055   97187 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0925 10:52:22.735063   97187 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0925 10:52:22.735071   97187 command_runner.go:130] > # Path to directory for container attach sockets.
	I0925 10:52:22.735077   97187 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0925 10:52:22.735085   97187 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0925 10:52:22.735091   97187 command_runner.go:130] > # bind_mount_prefix = ""
	I0925 10:52:22.735097   97187 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0925 10:52:22.735103   97187 command_runner.go:130] > # read_only = false
	I0925 10:52:22.735110   97187 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0925 10:52:22.735118   97187 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0925 10:52:22.735122   97187 command_runner.go:130] > # live configuration reload.
	I0925 10:52:22.735129   97187 command_runner.go:130] > # log_level = "info"
	I0925 10:52:22.735134   97187 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0925 10:52:22.735142   97187 command_runner.go:130] > # This option supports live configuration reload.
	I0925 10:52:22.735146   97187 command_runner.go:130] > # log_filter = ""
	I0925 10:52:22.735154   97187 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0925 10:52:22.735162   97187 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0925 10:52:22.735169   97187 command_runner.go:130] > # separated by commas.
	I0925 10:52:22.735173   97187 command_runner.go:130] > # uid_mappings = ""
	I0925 10:52:22.735182   97187 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0925 10:52:22.735191   97187 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0925 10:52:22.735197   97187 command_runner.go:130] > # separated by commas.
	I0925 10:52:22.735201   97187 command_runner.go:130] > # gid_mappings = ""
	I0925 10:52:22.735209   97187 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0925 10:52:22.735215   97187 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0925 10:52:22.735224   97187 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0925 10:52:22.735230   97187 command_runner.go:130] > # minimum_mappable_uid = -1
	I0925 10:52:22.735237   97187 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0925 10:52:22.735245   97187 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0925 10:52:22.735253   97187 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0925 10:52:22.735260   97187 command_runner.go:130] > # minimum_mappable_gid = -1
	I0925 10:52:22.735266   97187 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0925 10:52:22.735274   97187 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0925 10:52:22.735282   97187 command_runner.go:130] > # value is 30s; lower values are ignored by CRI-O.
	I0925 10:52:22.735288   97187 command_runner.go:130] > # ctr_stop_timeout = 30
	I0925 10:52:22.735294   97187 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0925 10:52:22.735304   97187 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0925 10:52:22.735309   97187 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0925 10:52:22.735316   97187 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0925 10:52:22.735320   97187 command_runner.go:130] > # drop_infra_ctr = true
	I0925 10:52:22.735329   97187 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0925 10:52:22.735337   97187 command_runner.go:130] > # You can use Linux CPU list format to specify desired CPUs.
	I0925 10:52:22.735347   97187 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0925 10:52:22.735353   97187 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0925 10:52:22.735359   97187 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0925 10:52:22.735366   97187 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0925 10:52:22.735373   97187 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0925 10:52:22.735380   97187 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0925 10:52:22.735386   97187 command_runner.go:130] > # pinns_path = ""
	I0925 10:52:22.735392   97187 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0925 10:52:22.735401   97187 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0925 10:52:22.735407   97187 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0925 10:52:22.735414   97187 command_runner.go:130] > # default_runtime = "runc"
	I0925 10:52:22.735419   97187 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0925 10:52:22.735428   97187 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating it as a directory).
	I0925 10:52:22.735439   97187 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0925 10:52:22.735447   97187 command_runner.go:130] > # creation as a file is not desired either.
	I0925 10:52:22.735456   97187 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0925 10:52:22.735463   97187 command_runner.go:130] > # the hostname is being managed dynamically.
	I0925 10:52:22.735467   97187 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0925 10:52:22.735471   97187 command_runner.go:130] > # ]
	I0925 10:52:22.735479   97187 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0925 10:52:22.735487   97187 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0925 10:52:22.735496   97187 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0925 10:52:22.735504   97187 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0925 10:52:22.735508   97187 command_runner.go:130] > #
	I0925 10:52:22.735515   97187 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0925 10:52:22.735520   97187 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0925 10:52:22.735527   97187 command_runner.go:130] > #  runtime_type = "oci"
	I0925 10:52:22.735532   97187 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0925 10:52:22.735539   97187 command_runner.go:130] > #  privileged_without_host_devices = false
	I0925 10:52:22.735543   97187 command_runner.go:130] > #  allowed_annotations = []
	I0925 10:52:22.735549   97187 command_runner.go:130] > # Where:
	I0925 10:52:22.735554   97187 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0925 10:52:22.735563   97187 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0925 10:52:22.735571   97187 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0925 10:52:22.735579   97187 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0925 10:52:22.735585   97187 command_runner.go:130] > #   in $PATH.
	I0925 10:52:22.735592   97187 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0925 10:52:22.735599   97187 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0925 10:52:22.735605   97187 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0925 10:52:22.735611   97187 command_runner.go:130] > #   state.
	I0925 10:52:22.735617   97187 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0925 10:52:22.735626   97187 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0925 10:52:22.735634   97187 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0925 10:52:22.735642   97187 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0925 10:52:22.735650   97187 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0925 10:52:22.735659   97187 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0925 10:52:22.735667   97187 command_runner.go:130] > #   The currently recognized values are:
	I0925 10:52:22.735673   97187 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0925 10:52:22.735682   97187 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0925 10:52:22.735691   97187 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0925 10:52:22.735697   97187 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0925 10:52:22.735707   97187 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0925 10:52:22.735716   97187 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0925 10:52:22.735725   97187 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0925 10:52:22.735733   97187 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0925 10:52:22.735740   97187 command_runner.go:130] > #   should be moved to the container's cgroup
	I0925 10:52:22.735747   97187 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0925 10:52:22.735752   97187 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0925 10:52:22.735759   97187 command_runner.go:130] > runtime_type = "oci"
	I0925 10:52:22.735763   97187 command_runner.go:130] > runtime_root = "/run/runc"
	I0925 10:52:22.735769   97187 command_runner.go:130] > runtime_config_path = ""
	I0925 10:52:22.735773   97187 command_runner.go:130] > monitor_path = ""
	I0925 10:52:22.735780   97187 command_runner.go:130] > monitor_cgroup = ""
	I0925 10:52:22.735784   97187 command_runner.go:130] > monitor_exec_cgroup = ""
	I0925 10:52:22.735807   97187 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0925 10:52:22.735813   97187 command_runner.go:130] > # running containers
	I0925 10:52:22.735818   97187 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0925 10:52:22.735826   97187 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0925 10:52:22.735835   97187 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0925 10:52:22.735843   97187 command_runner.go:130] > # surface and mitigating the consequences of container breakout.
	I0925 10:52:22.735850   97187 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0925 10:52:22.735857   97187 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0925 10:52:22.735862   97187 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0925 10:52:22.735869   97187 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0925 10:52:22.735874   97187 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0925 10:52:22.735881   97187 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0925 10:52:22.735887   97187 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0925 10:52:22.735895   97187 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0925 10:52:22.735902   97187 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0925 10:52:22.735911   97187 command_runner.go:130] > # Each workload has a name, an activation_annotation, an annotation_prefix and a set of resources it supports mutating.
	I0925 10:52:22.735920   97187 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0925 10:52:22.735928   97187 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0925 10:52:22.735937   97187 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0925 10:52:22.735947   97187 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0925 10:52:22.735955   97187 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0925 10:52:22.735964   97187 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0925 10:52:22.735971   97187 command_runner.go:130] > # Example:
	I0925 10:52:22.735976   97187 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0925 10:52:22.735983   97187 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0925 10:52:22.735988   97187 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0925 10:52:22.735996   97187 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0925 10:52:22.736001   97187 command_runner.go:130] > # cpuset = "0-1"
	I0925 10:52:22.736005   97187 command_runner.go:130] > # cpushares = 1024
	I0925 10:52:22.736011   97187 command_runner.go:130] > # Where:
	I0925 10:52:22.736016   97187 command_runner.go:130] > # The workload name is workload-type.
	I0925 10:52:22.736025   97187 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (an exact string match).
	I0925 10:52:22.736035   97187 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0925 10:52:22.736044   97187 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0925 10:52:22.736051   97187 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0925 10:52:22.736060   97187 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0925 10:52:22.736066   97187 command_runner.go:130] > # 
	I0925 10:52:22.736073   97187 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0925 10:52:22.736078   97187 command_runner.go:130] > #
	I0925 10:52:22.736084   97187 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0925 10:52:22.736093   97187 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0925 10:52:22.736101   97187 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0925 10:52:22.736109   97187 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0925 10:52:22.736117   97187 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0925 10:52:22.736123   97187 command_runner.go:130] > [crio.image]
	I0925 10:52:22.736129   97187 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0925 10:52:22.736136   97187 command_runner.go:130] > # default_transport = "docker://"
	I0925 10:52:22.736142   97187 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0925 10:52:22.736150   97187 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0925 10:52:22.736157   97187 command_runner.go:130] > # global_auth_file = ""
	I0925 10:52:22.736162   97187 command_runner.go:130] > # The image used to instantiate infra containers.
	I0925 10:52:22.736170   97187 command_runner.go:130] > # This option supports live configuration reload.
	I0925 10:52:22.736177   97187 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0925 10:52:22.736183   97187 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0925 10:52:22.736191   97187 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0925 10:52:22.736197   97187 command_runner.go:130] > # This option supports live configuration reload.
	I0925 10:52:22.736203   97187 command_runner.go:130] > # pause_image_auth_file = ""
	I0925 10:52:22.736209   97187 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0925 10:52:22.736219   97187 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0925 10:52:22.736227   97187 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0925 10:52:22.736233   97187 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0925 10:52:22.736240   97187 command_runner.go:130] > # pause_command = "/pause"
	I0925 10:52:22.736246   97187 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0925 10:52:22.736254   97187 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0925 10:52:22.736263   97187 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0925 10:52:22.736271   97187 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0925 10:52:22.736279   97187 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0925 10:52:22.736287   97187 command_runner.go:130] > # signature_policy = ""
	I0925 10:52:22.736299   97187 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0925 10:52:22.736308   97187 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0925 10:52:22.736315   97187 command_runner.go:130] > # changing them here.
	I0925 10:52:22.736319   97187 command_runner.go:130] > # insecure_registries = [
	I0925 10:52:22.736325   97187 command_runner.go:130] > # ]
	I0925 10:52:22.736331   97187 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0925 10:52:22.736339   97187 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0925 10:52:22.736345   97187 command_runner.go:130] > # image_volumes = "mkdir"
	I0925 10:52:22.736351   97187 command_runner.go:130] > # Temporary directory to use for storing big files
	I0925 10:52:22.736358   97187 command_runner.go:130] > # big_files_temporary_dir = ""
	I0925 10:52:22.736364   97187 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0925 10:52:22.736370   97187 command_runner.go:130] > # CNI plugins.
	I0925 10:52:22.736374   97187 command_runner.go:130] > [crio.network]
	I0925 10:52:22.736383   97187 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0925 10:52:22.736390   97187 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0925 10:52:22.736397   97187 command_runner.go:130] > # cni_default_network = ""
	I0925 10:52:22.736403   97187 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0925 10:52:22.736410   97187 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0925 10:52:22.736415   97187 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0925 10:52:22.736421   97187 command_runner.go:130] > # plugin_dirs = [
	I0925 10:52:22.736426   97187 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0925 10:52:22.736431   97187 command_runner.go:130] > # ]
	I0925 10:52:22.736437   97187 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I0925 10:52:22.736443   97187 command_runner.go:130] > [crio.metrics]
	I0925 10:52:22.736449   97187 command_runner.go:130] > # Globally enable or disable metrics support.
	I0925 10:52:22.736455   97187 command_runner.go:130] > # enable_metrics = false
	I0925 10:52:22.736460   97187 command_runner.go:130] > # Specify enabled metrics collectors.
	I0925 10:52:22.736467   97187 command_runner.go:130] > # By default, all metrics are enabled.
	I0925 10:52:22.736473   97187 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0925 10:52:22.736482   97187 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0925 10:52:22.736490   97187 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0925 10:52:22.736496   97187 command_runner.go:130] > # metrics_collectors = [
	I0925 10:52:22.736500   97187 command_runner.go:130] > # 	"operations",
	I0925 10:52:22.736507   97187 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0925 10:52:22.736511   97187 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0925 10:52:22.736518   97187 command_runner.go:130] > # 	"operations_errors",
	I0925 10:52:22.736522   97187 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0925 10:52:22.736529   97187 command_runner.go:130] > # 	"image_pulls_by_name",
	I0925 10:52:22.736534   97187 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0925 10:52:22.736540   97187 command_runner.go:130] > # 	"image_pulls_failures",
	I0925 10:52:22.736544   97187 command_runner.go:130] > # 	"image_pulls_successes",
	I0925 10:52:22.736552   97187 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0925 10:52:22.736556   97187 command_runner.go:130] > # 	"image_layer_reuse",
	I0925 10:52:22.736562   97187 command_runner.go:130] > # 	"containers_oom_total",
	I0925 10:52:22.736566   97187 command_runner.go:130] > # 	"containers_oom",
	I0925 10:52:22.736573   97187 command_runner.go:130] > # 	"processes_defunct",
	I0925 10:52:22.736577   97187 command_runner.go:130] > # 	"operations_total",
	I0925 10:52:22.736583   97187 command_runner.go:130] > # 	"operations_latency_seconds",
	I0925 10:52:22.736588   97187 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0925 10:52:22.736595   97187 command_runner.go:130] > # 	"operations_errors_total",
	I0925 10:52:22.736599   97187 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0925 10:52:22.736606   97187 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0925 10:52:22.736610   97187 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0925 10:52:22.736617   97187 command_runner.go:130] > # 	"image_pulls_success_total",
	I0925 10:52:22.736622   97187 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0925 10:52:22.736628   97187 command_runner.go:130] > # 	"containers_oom_count_total",
	I0925 10:52:22.736648   97187 command_runner.go:130] > # ]
	I0925 10:52:22.736657   97187 command_runner.go:130] > # The port on which the metrics server will listen.
	I0925 10:52:22.736666   97187 command_runner.go:130] > # metrics_port = 9090
	I0925 10:52:22.736671   97187 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0925 10:52:22.736675   97187 command_runner.go:130] > # metrics_socket = ""
	I0925 10:52:22.736681   97187 command_runner.go:130] > # The certificate for the secure metrics server.
	I0925 10:52:22.736688   97187 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0925 10:52:22.736695   97187 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0925 10:52:22.736700   97187 command_runner.go:130] > # certificate on any modification event.
	I0925 10:52:22.736707   97187 command_runner.go:130] > # metrics_cert = ""
	I0925 10:52:22.736712   97187 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0925 10:52:22.736718   97187 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0925 10:52:22.736722   97187 command_runner.go:130] > # metrics_key = ""
	I0925 10:52:22.736729   97187 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0925 10:52:22.736733   97187 command_runner.go:130] > [crio.tracing]
	I0925 10:52:22.736741   97187 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0925 10:52:22.736747   97187 command_runner.go:130] > # enable_tracing = false
	I0925 10:52:22.736753   97187 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0925 10:52:22.736759   97187 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0925 10:52:22.736765   97187 command_runner.go:130] > # Number of samples to collect per million spans.
	I0925 10:52:22.736771   97187 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0925 10:52:22.736777   97187 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0925 10:52:22.736784   97187 command_runner.go:130] > [crio.stats]
	I0925 10:52:22.736790   97187 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0925 10:52:22.736798   97187 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0925 10:52:22.736805   97187 command_runner.go:130] > # stats_collection_period = 0
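	The block above is CRI-O echoing its mostly commented-out defaults; in this run the notable overrides are the [crio.runtime.runtimes.runc] stanza and pause_image. As a minimal sketch of changing one of these defaults without editing crio.conf itself (the drop-in filename and values are illustrative, not taken from this run), CRI-O also merges files from /etc/crio/crio.conf.d/ over the main config, later files winning:

	    # hypothetical drop-in enabling the metrics endpoint described under [crio.metrics] above
	    sudo tee /etc/crio/crio.conf.d/99-metrics.conf <<'EOF'
	    [crio.metrics]
	    enable_metrics = true
	    metrics_port = 9090
	    EOF
	    sudo systemctl restart crio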
	I0925 10:52:22.736871   97187 cni.go:84] Creating CNI manager for ""
	I0925 10:52:22.736882   97187 cni.go:136] 1 nodes found, recommending kindnet
	I0925 10:52:22.736899   97187 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0925 10:52:22.736917   97187 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-529126 NodeName:multinode-529126 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0925 10:52:22.737036   97187 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-529126"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
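	The rendered kubeadm.yaml above stacks four documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A quick sanity check of such a file on the node, sketched with the binary path this run uses ('kubeadm config validate' is available in recent kubeadm releases, including the v1.28 line):

	    sudo /var/lib/minikube/binaries/v1.28.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	    # compare against kubeadm's own defaults for the same API versions
	    sudo /var/lib/minikube/binaries/v1.28.2/kubeadm config print init-defaults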
	
	I0925 10:52:22.737098   97187 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-529126 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:multinode-529126 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
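	The empty ExecStart= followed by a second ExecStart= in the unit above is the standard systemd drop-in idiom: the blank assignment clears the unit's original command so the next line replaces it rather than appending a second one. A sketch of writing the same override by hand, using the paths from this log (flag list abridged):

	    sudo mkdir -p /etc/systemd/system/kubelet.service.d
	    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf <<'EOF'
	    [Service]
	    ExecStart=
	    ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf --container-runtime-endpoint=unix:///var/run/crio/crio.sock
	    EOF
	    sudo systemctl daemon-reload && sudo systemctl restart kubelet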
	I0925 10:52:22.737143   97187 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I0925 10:52:22.744661   97187 command_runner.go:130] > kubeadm
	I0925 10:52:22.744681   97187 command_runner.go:130] > kubectl
	I0925 10:52:22.744686   97187 command_runner.go:130] > kubelet
	I0925 10:52:22.745301   97187 binaries.go:44] Found k8s binaries, skipping transfer
	I0925 10:52:22.745355   97187 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0925 10:52:22.752751   97187 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I0925 10:52:22.767438   97187 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0925 10:52:22.782986   97187 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I0925 10:52:22.798320   97187 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0925 10:52:22.801462   97187 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0925 10:52:22.810698   97187 certs.go:56] Setting up /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/multinode-529126 for IP: 192.168.58.2
	I0925 10:52:22.810732   97187 certs.go:190] acquiring lock for shared ca certs: {Name:mk1dc4321044392bda6d0b04ee5f4e5cca314d3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 10:52:22.810877   97187 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17297-5744/.minikube/ca.key
	I0925 10:52:22.810930   97187 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17297-5744/.minikube/proxy-client-ca.key
	I0925 10:52:22.810985   97187 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/multinode-529126/client.key
	I0925 10:52:22.811002   97187 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/multinode-529126/client.crt with IP's: []
	I0925 10:52:23.206555   97187 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/multinode-529126/client.crt ...
	I0925 10:52:23.206587   97187 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/multinode-529126/client.crt: {Name:mk6b5dbe2d0559fdfcc6590bed11b23ae9142cec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 10:52:23.206757   97187 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/multinode-529126/client.key ...
	I0925 10:52:23.206768   97187 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/multinode-529126/client.key: {Name:mke06664248a19dc90eab9371856a38df838d1ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 10:52:23.206845   97187 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/multinode-529126/apiserver.key.cee25041
	I0925 10:52:23.206859   97187 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/multinode-529126/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0925 10:52:23.469432   97187 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/multinode-529126/apiserver.crt.cee25041 ...
	I0925 10:52:23.469464   97187 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/multinode-529126/apiserver.crt.cee25041: {Name:mkdb5512b9229db6f74f361c5e98b7882f0ee5d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 10:52:23.469630   97187 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/multinode-529126/apiserver.key.cee25041 ...
	I0925 10:52:23.469641   97187 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/multinode-529126/apiserver.key.cee25041: {Name:mk59779f86eb2021920cb5f703066154f2aaca45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 10:52:23.469708   97187 certs.go:337] copying /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/multinode-529126/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/multinode-529126/apiserver.crt
	I0925 10:52:23.469771   97187 certs.go:341] copying /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/multinode-529126/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/multinode-529126/apiserver.key
	I0925 10:52:23.469818   97187 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/multinode-529126/proxy-client.key
	I0925 10:52:23.469833   97187 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/multinode-529126/proxy-client.crt with IP's: []
	I0925 10:52:23.651705   97187 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/multinode-529126/proxy-client.crt ...
	I0925 10:52:23.651737   97187 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/multinode-529126/proxy-client.crt: {Name:mk75f0784776b47a5516950f0d124587946dba2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 10:52:23.651889   97187 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/multinode-529126/proxy-client.key ...
	I0925 10:52:23.651900   97187 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/multinode-529126/proxy-client.key: {Name:mkf46c520701c42c4fe87480bd834e8f5e0bfe29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
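	The apiserver certificate generated above is signed for the node IP (192.168.58.2), the in-cluster service VIP (10.96.0.1), and the loopback addresses. To confirm which SANs actually landed in the cert, one could inspect it with openssl (path taken from the log):

	    openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/multinode-529126/apiserver.crt \
	      | grep -A1 'Subject Alternative Name'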
	I0925 10:52:23.651968   97187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/multinode-529126/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0925 10:52:23.651984   97187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/multinode-529126/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0925 10:52:23.651995   97187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/multinode-529126/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0925 10:52:23.652004   97187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/multinode-529126/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0925 10:52:23.652018   97187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0925 10:52:23.652030   97187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0925 10:52:23.652047   97187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0925 10:52:23.652061   97187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0925 10:52:23.652115   97187 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/home/jenkins/minikube-integration/17297-5744/.minikube/certs/12516.pem (1338 bytes)
	W0925 10:52:23.652147   97187 certs.go:433] ignoring /home/jenkins/minikube-integration/17297-5744/.minikube/certs/home/jenkins/minikube-integration/17297-5744/.minikube/certs/12516_empty.pem, impossibly tiny 0 bytes
	I0925 10:52:23.652159   97187 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca-key.pem (1675 bytes)
	I0925 10:52:23.652182   97187 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca.pem (1078 bytes)
	I0925 10:52:23.652203   97187 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/home/jenkins/minikube-integration/17297-5744/.minikube/certs/cert.pem (1123 bytes)
	I0925 10:52:23.652225   97187 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/home/jenkins/minikube-integration/17297-5744/.minikube/certs/key.pem (1675 bytes)
	I0925 10:52:23.652263   97187 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-5744/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17297-5744/.minikube/files/etc/ssl/certs/125162.pem (1708 bytes)
	I0925 10:52:23.652285   97187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/12516.pem -> /usr/share/ca-certificates/12516.pem
	I0925 10:52:23.652299   97187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/files/etc/ssl/certs/125162.pem -> /usr/share/ca-certificates/125162.pem
	I0925 10:52:23.652311   97187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0925 10:52:23.652868   97187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/multinode-529126/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0925 10:52:23.674669   97187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/multinode-529126/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0925 10:52:23.694356   97187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/multinode-529126/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0925 10:52:23.716311   97187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/multinode-529126/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0925 10:52:23.735780   97187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0925 10:52:23.755625   97187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0925 10:52:23.775419   97187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0925 10:52:23.794935   97187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0925 10:52:23.814395   97187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/certs/12516.pem --> /usr/share/ca-certificates/12516.pem (1338 bytes)
	I0925 10:52:23.834070   97187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/files/etc/ssl/certs/125162.pem --> /usr/share/ca-certificates/125162.pem (1708 bytes)
	I0925 10:52:23.853399   97187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0925 10:52:23.873611   97187 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0925 10:52:23.888822   97187 ssh_runner.go:195] Run: openssl version
	I0925 10:52:23.893285   97187 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0925 10:52:23.893492   97187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0925 10:52:23.901359   97187 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0925 10:52:23.904166   97187 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 25 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I0925 10:52:23.904188   97187 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 25 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I0925 10:52:23.904216   97187 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0925 10:52:23.909824   97187 command_runner.go:130] > b5213941
	I0925 10:52:23.909990   97187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0925 10:52:23.918205   97187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12516.pem && ln -fs /usr/share/ca-certificates/12516.pem /etc/ssl/certs/12516.pem"
	I0925 10:52:23.926160   97187 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12516.pem
	I0925 10:52:23.929147   97187 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 25 10:39 /usr/share/ca-certificates/12516.pem
	I0925 10:52:23.929167   97187 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 25 10:39 /usr/share/ca-certificates/12516.pem
	I0925 10:52:23.929195   97187 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12516.pem
	I0925 10:52:23.934825   97187 command_runner.go:130] > 51391683
	I0925 10:52:23.934856   97187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12516.pem /etc/ssl/certs/51391683.0"
	I0925 10:52:23.942549   97187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125162.pem && ln -fs /usr/share/ca-certificates/125162.pem /etc/ssl/certs/125162.pem"
	I0925 10:52:23.950559   97187 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125162.pem
	I0925 10:52:23.953537   97187 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 25 10:39 /usr/share/ca-certificates/125162.pem
	I0925 10:52:23.953573   97187 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 25 10:39 /usr/share/ca-certificates/125162.pem
	I0925 10:52:23.953608   97187 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125162.pem
	I0925 10:52:23.959475   97187 command_runner.go:130] > 3ec20f2e
	I0925 10:52:23.959665   97187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125162.pem /etc/ssl/certs/3ec20f2e.0"
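	The three hash/ln pairs above implement the stock OpenSSL trust-store layout: each CA under /etc/ssl/certs must be reachable through a symlink named after its subject-name hash with a .0 suffix, which is how openssl looks up CAs at verification time. A generic sketch of installing any CA the same way (the filename is hypothetical):

	    CERT=/usr/share/ca-certificates/myCA.pem        # hypothetical CA file
	    HASH=$(openssl x509 -hash -noout -in "$CERT")   # subject hash, e.g. b5213941 above
	    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"  # .0 = first cert with this hash
	    openssl verify -CApath /etc/ssl/certs leaf.pem  # now resolvable by hash lookup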
	I0925 10:52:23.967667   97187 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0925 10:52:23.970628   97187 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0925 10:52:23.970674   97187 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0925 10:52:23.970718   97187 kubeadm.go:404] StartCluster: {Name:multinode-529126 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-529126 Namespace:default APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetr
ics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 10:52:23.970781   97187 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0925 10:52:23.970811   97187 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0925 10:52:24.003175   97187 cri.go:89] found id: ""
	I0925 10:52:24.003233   97187 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0925 10:52:24.010938   97187 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0925 10:52:24.010966   97187 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0925 10:52:24.010977   97187 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0925 10:52:24.011046   97187 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0925 10:52:24.018346   97187 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0925 10:52:24.018396   97187 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0925 10:52:24.025278   97187 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0925 10:52:24.025296   97187 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0925 10:52:24.025307   97187 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0925 10:52:24.025321   97187 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0925 10:52:24.025349   97187 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0925 10:52:24.025376   97187 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
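	Each check named in --ignore-preflight-errors above can also be exercised on its own: kubeadm exposes preflight as a standalone phase, which is useful when debugging why minikube must suppress SystemVerification under the docker driver. A sketch with the same config path:

	    sudo /var/lib/minikube/binaries/v1.28.2/kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml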
	I0925 10:52:24.066874   97187 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I0925 10:52:24.066904   97187 command_runner.go:130] > [init] Using Kubernetes version: v1.28.2
	I0925 10:52:24.066946   97187 kubeadm.go:322] [preflight] Running pre-flight checks
	I0925 10:52:24.066957   97187 command_runner.go:130] > [preflight] Running pre-flight checks
	I0925 10:52:24.101153   97187 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0925 10:52:24.101180   97187 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0925 10:52:24.101272   97187 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1042-gcp
	I0925 10:52:24.101301   97187 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1042-gcp
	I0925 10:52:24.101369   97187 kubeadm.go:322] OS: Linux
	I0925 10:52:24.101389   97187 command_runner.go:130] > OS: Linux
	I0925 10:52:24.101453   97187 kubeadm.go:322] CGROUPS_CPU: enabled
	I0925 10:52:24.101470   97187 command_runner.go:130] > CGROUPS_CPU: enabled
	I0925 10:52:24.101538   97187 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0925 10:52:24.101549   97187 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0925 10:52:24.101611   97187 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0925 10:52:24.101621   97187 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0925 10:52:24.101683   97187 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0925 10:52:24.101693   97187 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0925 10:52:24.101767   97187 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0925 10:52:24.101777   97187 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0925 10:52:24.101836   97187 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0925 10:52:24.101846   97187 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0925 10:52:24.101879   97187 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0925 10:52:24.101889   97187 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0925 10:52:24.101955   97187 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0925 10:52:24.101976   97187 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0925 10:52:24.102045   97187 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0925 10:52:24.102057   97187 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0925 10:52:24.162274   97187 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0925 10:52:24.162305   97187 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0925 10:52:24.162395   97187 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0925 10:52:24.162404   97187 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0925 10:52:24.162501   97187 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0925 10:52:24.162524   97187 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
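	As the preflight message says, the image pulls can be done ahead of time so 'kubeadm init' itself stays fast. A sketch pinned to the version this run uses:

	    # list, then pre-pull, the control-plane images for v1.28.2
	    sudo /var/lib/minikube/binaries/v1.28.2/kubeadm config images list --kubernetes-version v1.28.2
	    sudo /var/lib/minikube/binaries/v1.28.2/kubeadm config images pull --kubernetes-version v1.28.2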
	I0925 10:52:24.349060   97187 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0925 10:52:24.351468   97187 out.go:204]   - Generating certificates and keys ...
	I0925 10:52:24.349122   97187 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0925 10:52:24.351616   97187 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0925 10:52:24.351633   97187 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0925 10:52:24.351728   97187 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0925 10:52:24.351738   97187 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0925 10:52:24.585025   97187 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0925 10:52:24.585056   97187 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0925 10:52:24.736510   97187 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0925 10:52:24.736547   97187 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0925 10:52:24.805366   97187 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0925 10:52:24.805395   97187 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0925 10:52:25.085838   97187 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0925 10:52:25.085867   97187 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0925 10:52:25.168713   97187 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0925 10:52:25.168740   97187 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0925 10:52:25.168877   97187 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-529126] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0925 10:52:25.168907   97187 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-529126] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0925 10:52:25.354761   97187 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0925 10:52:25.354792   97187 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0925 10:52:25.354943   97187 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-529126] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0925 10:52:25.354953   97187 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-529126] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0925 10:52:25.457836   97187 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0925 10:52:25.457889   97187 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0925 10:52:25.638868   97187 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0925 10:52:25.638896   97187 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0925 10:52:25.734620   97187 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0925 10:52:25.734648   97187 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0925 10:52:25.734736   97187 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0925 10:52:25.734746   97187 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0925 10:52:26.180604   97187 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0925 10:52:26.180660   97187 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0925 10:52:26.314793   97187 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0925 10:52:26.314821   97187 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0925 10:52:26.402075   97187 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0925 10:52:26.402091   97187 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0925 10:52:26.501494   97187 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0925 10:52:26.501525   97187 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0925 10:52:26.502041   97187 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0925 10:52:26.502065   97187 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0925 10:52:26.504170   97187 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0925 10:52:26.506607   97187 out.go:204]   - Booting up control plane ...
	I0925 10:52:26.504208   97187 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0925 10:52:26.506713   97187 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0925 10:52:26.506731   97187 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0925 10:52:26.506862   97187 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0925 10:52:26.506884   97187 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0925 10:52:26.507394   97187 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0925 10:52:26.507412   97187 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0925 10:52:26.515498   97187 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0925 10:52:26.515533   97187 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0925 10:52:26.516229   97187 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0925 10:52:26.516246   97187 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0925 10:52:26.516299   97187 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0925 10:52:26.516318   97187 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0925 10:52:26.594667   97187 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0925 10:52:26.594695   97187 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0925 10:52:31.596549   97187 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.001904 seconds
	I0925 10:52:31.596579   97187 command_runner.go:130] > [apiclient] All control plane components are healthy after 5.001904 seconds
	I0925 10:52:31.596748   97187 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0925 10:52:31.596759   97187 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0925 10:52:31.607978   97187 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0925 10:52:31.608016   97187 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0925 10:52:32.127492   97187 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0925 10:52:32.127507   97187 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0925 10:52:32.127846   97187 kubeadm.go:322] [mark-control-plane] Marking the node multinode-529126 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0925 10:52:32.127885   97187 command_runner.go:130] > [mark-control-plane] Marking the node multinode-529126 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0925 10:52:32.636243   97187 kubeadm.go:322] [bootstrap-token] Using token: fpthnz.1jaht0hrz5o8o5bu
	I0925 10:52:32.637848   97187 out.go:204]   - Configuring RBAC rules ...
	I0925 10:52:32.636341   97187 command_runner.go:130] > [bootstrap-token] Using token: fpthnz.1jaht0hrz5o8o5bu
	I0925 10:52:32.638029   97187 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0925 10:52:32.638051   97187 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0925 10:52:32.641453   97187 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0925 10:52:32.641460   97187 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0925 10:52:32.648273   97187 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0925 10:52:32.648292   97187 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0925 10:52:32.650788   97187 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0925 10:52:32.650806   97187 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0925 10:52:32.653290   97187 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0925 10:52:32.653308   97187 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0925 10:52:32.655761   97187 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0925 10:52:32.655779   97187 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0925 10:52:32.665275   97187 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0925 10:52:32.665305   97187 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0925 10:52:32.889242   97187 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0925 10:52:32.889267   97187 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0925 10:52:33.049473   97187 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0925 10:52:33.049499   97187 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0925 10:52:33.050733   97187 kubeadm.go:322] 
	I0925 10:52:33.050828   97187 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0925 10:52:33.050843   97187 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0925 10:52:33.050849   97187 kubeadm.go:322] 
	I0925 10:52:33.050946   97187 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0925 10:52:33.050958   97187 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0925 10:52:33.050964   97187 kubeadm.go:322] 
	I0925 10:52:33.050999   97187 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0925 10:52:33.051009   97187 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0925 10:52:33.051085   97187 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0925 10:52:33.051096   97187 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0925 10:52:33.051161   97187 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0925 10:52:33.051177   97187 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0925 10:52:33.051182   97187 kubeadm.go:322] 
	I0925 10:52:33.051251   97187 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0925 10:52:33.051264   97187 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0925 10:52:33.051269   97187 kubeadm.go:322] 
	I0925 10:52:33.051331   97187 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0925 10:52:33.051342   97187 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0925 10:52:33.051348   97187 kubeadm.go:322] 
	I0925 10:52:33.051402   97187 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0925 10:52:33.051413   97187 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0925 10:52:33.051507   97187 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0925 10:52:33.051521   97187 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0925 10:52:33.051609   97187 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0925 10:52:33.051616   97187 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0925 10:52:33.051621   97187 kubeadm.go:322] 
	I0925 10:52:33.051695   97187 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0925 10:52:33.051700   97187 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0925 10:52:33.051759   97187 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0925 10:52:33.051763   97187 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0925 10:52:33.051765   97187 kubeadm.go:322] 
	I0925 10:52:33.051831   97187 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token fpthnz.1jaht0hrz5o8o5bu \
	I0925 10:52:33.051835   97187 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token fpthnz.1jaht0hrz5o8o5bu \
	I0925 10:52:33.051915   97187 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:1c3306aef1d6006cd71809f67dab403b34b4e2df13ef053b70f810d540f79657 \
	I0925 10:52:33.051919   97187 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:1c3306aef1d6006cd71809f67dab403b34b4e2df13ef053b70f810d540f79657 \
	I0925 10:52:33.051935   97187 kubeadm.go:322] 	--control-plane 
	I0925 10:52:33.051938   97187 command_runner.go:130] > 	--control-plane 
	I0925 10:52:33.051941   97187 kubeadm.go:322] 
	I0925 10:52:33.052007   97187 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0925 10:52:33.052011   97187 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0925 10:52:33.052014   97187 kubeadm.go:322] 
	I0925 10:52:33.052078   97187 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token fpthnz.1jaht0hrz5o8o5bu \
	I0925 10:52:33.052082   97187 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token fpthnz.1jaht0hrz5o8o5bu \
	I0925 10:52:33.052160   97187 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:1c3306aef1d6006cd71809f67dab403b34b4e2df13ef053b70f810d540f79657 
	I0925 10:52:33.052164   97187 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:1c3306aef1d6006cd71809f67dab403b34b4e2df13ef053b70f810d540f79657 
	I0925 10:52:33.054719   97187 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1042-gcp\n", err: exit status 1
	I0925 10:52:33.054744   97187 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1042-gcp\n", err: exit status 1
	I0925 10:52:33.054877   97187 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0925 10:52:33.054893   97187 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0925 10:52:33.054911   97187 cni.go:84] Creating CNI manager for ""
	I0925 10:52:33.054933   97187 cni.go:136] 1 nodes found, recommending kindnet
	I0925 10:52:33.056787   97187 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0925 10:52:33.058111   97187 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0925 10:52:33.062358   97187 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0925 10:52:33.062385   97187 command_runner.go:130] >   Size: 3955775   	Blocks: 7728       IO Block: 4096   regular file
	I0925 10:52:33.062397   97187 command_runner.go:130] > Device: 37h/55d	Inode: 544061      Links: 1
	I0925 10:52:33.062407   97187 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0925 10:52:33.062421   97187 command_runner.go:130] > Access: 2023-05-09 19:53:47.000000000 +0000
	I0925 10:52:33.062430   97187 command_runner.go:130] > Modify: 2023-05-09 19:53:47.000000000 +0000
	I0925 10:52:33.062443   97187 command_runner.go:130] > Change: 2023-09-25 10:33:47.107124260 +0000
	I0925 10:52:33.062456   97187 command_runner.go:130] >  Birth: 2023-09-25 10:33:47.087122342 +0000
	I0925 10:52:33.062524   97187 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I0925 10:52:33.062539   97187 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0925 10:52:33.080704   97187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0925 10:52:33.799721   97187 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0925 10:52:33.804136   97187 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0925 10:52:33.809954   97187 command_runner.go:130] > serviceaccount/kindnet created
	I0925 10:52:33.818173   97187 command_runner.go:130] > daemonset.apps/kindnet created
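	With one node found, minikube picks kindnet and applies the CNI manifest it just wrote into the node, using the version-matched kubectl binary and the in-VM kubeconfig. A small Go sketch of that apply step via os/exec, with the paths copied from the log; the real flow runs this command over SSH (ssh_runner), so executing it locally here is a simplification, not minikube's actual code path.

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    )

    func main() {
    	// Apply the generated CNI manifest with the kubelet-matched kubectl,
    	// mirroring the "Run: sudo .../kubectl apply ..." line in the log.
    	cmd := exec.Command("sudo",
    		"/var/lib/minikube/binaries/v1.28.2/kubectl", "apply",
    		"--kubeconfig=/var/lib/minikube/kubeconfig",
    		"-f", "/var/tmp/minikube/cni.yaml")
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		log.Fatalf("apply cni manifest: %v\n%s", err, out)
    	}
    	fmt.Printf("%s", out) // e.g. "daemonset.apps/kindnet created"
    }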
	I0925 10:52:33.822485   97187 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0925 10:52:33.822611   97187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:52:33.822615   97187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=1bf6c3d5317028f348e55ea19d261973a6487d3c minikube.k8s.io/name=multinode-529126 minikube.k8s.io/updated_at=2023_09_25T10_52_33_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:52:33.829063   97187 command_runner.go:130] > -16
	I0925 10:52:33.829094   97187 ops.go:34] apiserver oom_adj: -16
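	The -16 read back here is the API server's OOM score adjustment, fetched with cat /proc/$(pgrep kube-apiserver)/oom_adj; a strongly negative value tells the kernel OOM killer to spare the process. A minimal Go sketch of the same check, assuming a single kube-apiserver process is running on the host.

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Equivalent of: cat /proc/$(pgrep kube-apiserver)/oom_adj
    	pidOut, err := exec.Command("pgrep", "kube-apiserver").Output()
    	if err != nil {
    		log.Fatal(err) // pgrep exits non-zero if no process matched
    	}
    	pid := strings.Fields(string(pidOut))[0] // take the first PID
    	data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println(strings.TrimSpace(string(data))) // e.g. -16
    }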
	I0925 10:52:33.883677   97187 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0925 10:52:33.887814   97187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:52:33.896340   97187 command_runner.go:130] > node/multinode-529126 labeled
	I0925 10:52:33.970025   97187 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0925 10:52:33.972976   97187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:52:34.082199   97187 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0925 10:52:34.582989   97187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:52:34.643795   97187 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0925 10:52:35.082354   97187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:52:35.144727   97187 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0925 10:52:35.583332   97187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:52:35.643382   97187 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0925 10:52:36.083235   97187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:52:36.142556   97187 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0925 10:52:36.583084   97187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:52:36.644994   97187 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0925 10:52:37.082572   97187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:52:37.145625   97187 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0925 10:52:37.583256   97187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:52:37.643338   97187 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0925 10:52:38.082578   97187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:52:38.144460   97187 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0925 10:52:38.583132   97187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:52:38.643126   97187 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0925 10:52:39.083155   97187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:52:39.141695   97187 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0925 10:52:39.582737   97187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:52:39.643963   97187 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0925 10:52:40.082493   97187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:52:40.142196   97187 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0925 10:52:40.582345   97187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:52:40.644596   97187 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0925 10:52:41.083346   97187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:52:41.144063   97187 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0925 10:52:41.582368   97187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:52:41.641273   97187 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0925 10:52:42.083147   97187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:52:42.144298   97187 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0925 10:52:42.583409   97187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:52:42.647397   97187 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0925 10:52:43.083071   97187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:52:43.144713   97187 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0925 10:52:43.582719   97187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:52:43.642840   97187 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0925 10:52:44.083068   97187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:52:44.144707   97187 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0925 10:52:44.583358   97187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:52:44.650070   97187 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0925 10:52:45.082636   97187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0925 10:52:45.147965   97187 command_runner.go:130] > NAME      SECRETS   AGE
	I0925 10:52:45.147986   97187 command_runner.go:130] > default   0         0s
	I0925 10:52:45.148013   97187 kubeadm.go:1081] duration metric: took 11.325456318s to wait for elevateKubeSystemPrivileges.
	I0925 10:52:45.148036   97187 kubeadm.go:406] StartCluster complete in 21.177320908s
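	The burst of `serviceaccounts "default" not found` errors above is a deliberate poll: minikube re-runs `kubectl get sa default` roughly every 500ms until kube-controller-manager's ServiceAccount controller populates the namespace, then records the ~11.3s it took. A minimal client-go sketch of the same wait, assuming client-go is available and reusing the kubeconfig path from this log; the real code shells out to the in-VM kubectl over SSH rather than using an in-process client.

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("",
    		"/home/jenkins/minikube-integration/17297-5744/kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Poll about twice a second, as the log timestamps suggest, until
    	// the ServiceAccount controller has created "default".
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		_, err := cs.CoreV1().ServiceAccounts("default").
    			Get(context.TODO(), "default", metav1.GetOptions{})
    		if err == nil {
    			fmt.Println("default service account is ready")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	log.Fatal("timed out waiting for the default service account")
    }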
	I0925 10:52:45.148060   97187 settings.go:142] acquiring lock: {Name:mk1ac20708e0ba811b0d8618989be560267b849d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 10:52:45.148131   97187 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17297-5744/kubeconfig
	I0925 10:52:45.149101   97187 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17297-5744/kubeconfig: {Name:mkcd9251a91cb443db17b5c9d69f4674dad74ecd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 10:52:45.149354   97187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0925 10:52:45.149484   97187 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0925 10:52:45.149566   97187 config.go:182] Loaded profile config "multinode-529126": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0925 10:52:45.149573   97187 addons.go:69] Setting storage-provisioner=true in profile "multinode-529126"
	I0925 10:52:45.149589   97187 addons.go:69] Setting default-storageclass=true in profile "multinode-529126"
	I0925 10:52:45.149595   97187 addons.go:231] Setting addon storage-provisioner=true in "multinode-529126"
	I0925 10:52:45.149604   97187 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-529126"
	I0925 10:52:45.149644   97187 host.go:66] Checking if "multinode-529126" exists ...
	I0925 10:52:45.149721   97187 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17297-5744/kubeconfig
	I0925 10:52:45.149969   97187 cli_runner.go:164] Run: docker container inspect multinode-529126 --format={{.State.Status}}
	I0925 10:52:45.150012   97187 kapi.go:59] client config for multinode-529126: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17297-5744/.minikube/profiles/multinode-529126/client.crt", KeyFile:"/home/jenkins/minikube-integration/17297-5744/.minikube/profiles/multinode-529126/client.key", CAFile:"/home/jenkins/minikube-integration/17297-5744/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf6d20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
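	The &rest.Config{...} dump above is the client configuration minikube builds from the test run's kubeconfig (note the profile's client.crt/client.key and the cluster ca.crt). A minimal sketch of constructing the same kind of config with client-go's clientcmd, assuming the package is available; the kubeconfig path is the one the log references.

    package main

    import (
    	"fmt"
    	"log"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Load a rest.Config like the one dumped in the log, then build a clientset.
    	kubeconfig := "/home/jenkins/minikube-integration/17297-5744/kubeconfig"
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		log.Fatal(err)
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("API server:", cfg.Host, "client ready:", clientset != nil)
    }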
	I0925 10:52:45.150129   97187 cli_runner.go:164] Run: docker container inspect multinode-529126 --format={{.State.Status}}
	I0925 10:52:45.150834   97187 cert_rotation.go:137] Starting client certificate rotation controller
	I0925 10:52:45.151144   97187 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0925 10:52:45.151167   97187 round_trippers.go:469] Request Headers:
	I0925 10:52:45.151179   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:52:45.151189   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:52:45.161693   97187 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0925 10:52:45.161727   97187 round_trippers.go:577] Response Headers:
	I0925 10:52:45.161738   97187 round_trippers.go:580]     Content-Length: 291
	I0925 10:52:45.161747   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:52:45 GMT
	I0925 10:52:45.161755   97187 round_trippers.go:580]     Audit-Id: 48f7234c-5f2d-483d-aa3b-5303a10c5609
	I0925 10:52:45.161763   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:52:45.161772   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:52:45.161781   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:52:45.161792   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:52:45.161833   97187 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"3fc7d046-e1e7-4b20-9a74-1e7aa1ebad8e","resourceVersion":"268","creationTimestamp":"2023-09-25T10:52:32Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0925 10:52:45.162348   97187 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"3fc7d046-e1e7-4b20-9a74-1e7aa1ebad8e","resourceVersion":"268","creationTimestamp":"2023-09-25T10:52:32Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0925 10:52:45.162446   97187 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0925 10:52:45.162465   97187 round_trippers.go:469] Request Headers:
	I0925 10:52:45.162476   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:52:45.162486   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:52:45.162495   97187 round_trippers.go:473]     Content-Type: application/json
	I0925 10:52:45.169325   97187 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0925 10:52:45.169348   97187 round_trippers.go:577] Response Headers:
	I0925 10:52:45.169357   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:52:45.169365   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:52:45.169372   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:52:45.169383   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:52:45.169391   97187 round_trippers.go:580]     Content-Length: 291
	I0925 10:52:45.169401   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:52:45 GMT
	I0925 10:52:45.169409   97187 round_trippers.go:580]     Audit-Id: dc207c42-54d6-4f59-9527-505a5f86e668
	I0925 10:52:45.169435   97187 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"3fc7d046-e1e7-4b20-9a74-1e7aa1ebad8e","resourceVersion":"336","creationTimestamp":"2023-09-25T10:52:32Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0925 10:52:45.169592   97187 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0925 10:52:45.169603   97187 round_trippers.go:469] Request Headers:
	I0925 10:52:45.169610   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:52:45.169616   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:52:45.171659   97187 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0925 10:52:45.171705   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:52:45.173037   97187 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0925 10:52:45.173045   97187 round_trippers.go:577] Response Headers:
	I0925 10:52:45.173049   97187 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0925 10:52:45.173055   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:52:45.173062   97187 round_trippers.go:580]     Content-Length: 291
	I0925 10:52:45.173067   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:52:45 GMT
	I0925 10:52:45.173074   97187 round_trippers.go:580]     Audit-Id: 9ecdec8c-1b92-4608-818c-1c9a0461d3ec
	I0925 10:52:45.173079   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:52:45.173084   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:52:45.173090   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:52:45.173101   97187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-529126
	I0925 10:52:45.173109   97187 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"3fc7d046-e1e7-4b20-9a74-1e7aa1ebad8e","resourceVersion":"336","creationTimestamp":"2023-09-25T10:52:32Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0925 10:52:45.170358   97187 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17297-5744/kubeconfig
	I0925 10:52:45.173184   97187 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-529126" context rescaled to 1 replicas
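	The GET/PUT pair against .../deployments/coredns/scale above is the autoscaling/v1 Scale subresource: minikube reads the current scale (replicas: 2) and writes it back with replicas: 1, since a single-node cluster does not need two CoreDNS pods. A hedged in-process equivalent using client-go's GetScale/UpdateScale, assuming the same kubeconfig; minikube itself issues these round trips directly, as logged by round_trippers.go.

    package main

    import (
    	"context"
    	"log"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("",
    		"/home/jenkins/minikube-integration/17297-5744/kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	ctx := context.TODO()
    	// GET the Scale subresource, mirroring the first round trip in the log.
    	scale, err := cs.AppsV1().Deployments("kube-system").
    		GetScale(ctx, "coredns", metav1.GetOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	// PUT it back with spec.replicas lowered from 2 to 1.
    	scale.Spec.Replicas = 1
    	if _, err := cs.AppsV1().Deployments("kube-system").
    		UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
    		log.Fatal(err)
    	}
    }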
	I0925 10:52:45.173209   97187 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0925 10:52:45.174563   97187 out.go:177] * Verifying Kubernetes components...
	I0925 10:52:45.173464   97187 kapi.go:59] client config for multinode-529126: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17297-5744/.minikube/profiles/multinode-529126/client.crt", KeyFile:"/home/jenkins/minikube-integration/17297-5744/.minikube/profiles/multinode-529126/client.key", CAFile:"/home/jenkins/minikube-integration/17297-5744/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf6d20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0925 10:52:45.175157   97187 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0925 10:52:45.175171   97187 round_trippers.go:469] Request Headers:
	I0925 10:52:45.175184   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:52:45.175194   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:52:45.176785   97187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 10:52:45.178222   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:52:45.178240   97187 round_trippers.go:577] Response Headers:
	I0925 10:52:45.178251   97187 round_trippers.go:580]     Audit-Id: 0cdca37f-4208-4efb-a83c-bc6d214e2a77
	I0925 10:52:45.178260   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:52:45.178272   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:52:45.178283   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:52:45.178292   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:52:45.178303   97187 round_trippers.go:580]     Content-Length: 109
	I0925 10:52:45.178314   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:52:45 GMT
	I0925 10:52:45.178336   97187 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"336"},"items":[]}
	I0925 10:52:45.178605   97187 addons.go:231] Setting addon default-storageclass=true in "multinode-529126"
	I0925 10:52:45.178643   97187 host.go:66] Checking if "multinode-529126" exists ...
	I0925 10:52:45.179130   97187 cli_runner.go:164] Run: docker container inspect multinode-529126 --format={{.State.Status}}
	I0925 10:52:45.193422   97187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/multinode-529126/id_rsa Username:docker}
	I0925 10:52:45.201756   97187 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0925 10:52:45.201784   97187 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0925 10:52:45.201843   97187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-529126
	I0925 10:52:45.224106   97187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/multinode-529126/id_rsa Username:docker}
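	The sshutil lines dial the node's forwarded SSH port (127.0.0.1:32847) with the machine's id_rsa key so the addon manifests can be copied in. A rough equivalent with golang.org/x/crypto/ssh, under the assumption that host-key verification can be skipped for illustration (minikube's actual sshutil differs); user, port, and key path are taken from the log.

    package main

    import (
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/17297-5744/.minikube/machines/multinode-529126/id_rsa")
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // illustration only
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:32847", cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()
    	session, err := client.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer session.Close()
    	out, err := session.CombinedOutput("uname -a") // any remote command
    	if err != nil {
    		log.Fatal(err)
    	}
    	os.Stdout.Write(out)
    }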
	I0925 10:52:45.227834   97187 command_runner.go:130] > apiVersion: v1
	I0925 10:52:45.227857   97187 command_runner.go:130] > data:
	I0925 10:52:45.227864   97187 command_runner.go:130] >   Corefile: |
	I0925 10:52:45.227870   97187 command_runner.go:130] >     .:53 {
	I0925 10:52:45.227877   97187 command_runner.go:130] >         errors
	I0925 10:52:45.227885   97187 command_runner.go:130] >         health {
	I0925 10:52:45.227893   97187 command_runner.go:130] >            lameduck 5s
	I0925 10:52:45.227900   97187 command_runner.go:130] >         }
	I0925 10:52:45.227906   97187 command_runner.go:130] >         ready
	I0925 10:52:45.227916   97187 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0925 10:52:45.227926   97187 command_runner.go:130] >            pods insecure
	I0925 10:52:45.227938   97187 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0925 10:52:45.227948   97187 command_runner.go:130] >            ttl 30
	I0925 10:52:45.227958   97187 command_runner.go:130] >         }
	I0925 10:52:45.227968   97187 command_runner.go:130] >         prometheus :9153
	I0925 10:52:45.227982   97187 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0925 10:52:45.227992   97187 command_runner.go:130] >            max_concurrent 1000
	I0925 10:52:45.227999   97187 command_runner.go:130] >         }
	I0925 10:52:45.228009   97187 command_runner.go:130] >         cache 30
	I0925 10:52:45.228019   97187 command_runner.go:130] >         loop
	I0925 10:52:45.228028   97187 command_runner.go:130] >         reload
	I0925 10:52:45.228038   97187 command_runner.go:130] >         loadbalance
	I0925 10:52:45.228046   97187 command_runner.go:130] >     }
	I0925 10:52:45.228056   97187 command_runner.go:130] > kind: ConfigMap
	I0925 10:52:45.228065   97187 command_runner.go:130] > metadata:
	I0925 10:52:45.228078   97187 command_runner.go:130] >   creationTimestamp: "2023-09-25T10:52:32Z"
	I0925 10:52:45.228087   97187 command_runner.go:130] >   name: coredns
	I0925 10:52:45.228094   97187 command_runner.go:130] >   namespace: kube-system
	I0925 10:52:45.228104   97187 command_runner.go:130] >   resourceVersion: "264"
	I0925 10:52:45.228122   97187 command_runner.go:130] >   uid: 125dcbe0-f53b-481b-ab0f-f3253a7ee77e
	I0925 10:52:45.228320   97187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0925 10:52:45.228603   97187 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17297-5744/kubeconfig
	I0925 10:52:45.228936   97187 kapi.go:59] client config for multinode-529126: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17297-5744/.minikube/profiles/multinode-529126/client.crt", KeyFile:"/home/jenkins/minikube-integration/17297-5744/.minikube/profiles/multinode-529126/client.key", CAFile:"/home/jenkins/minikube-integration/17297-5744/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf6d20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0925 10:52:45.229266   97187 node_ready.go:35] waiting up to 6m0s for node "multinode-529126" to be "Ready" ...
	I0925 10:52:45.229356   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:52:45.229368   97187 round_trippers.go:469] Request Headers:
	I0925 10:52:45.229380   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:52:45.229393   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:52:45.231526   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:52:45.231560   97187 round_trippers.go:577] Response Headers:
	I0925 10:52:45.231570   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:52:45.231578   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:52:45.231585   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:52:45.231594   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:52:45.231607   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:52:45 GMT
	I0925 10:52:45.231619   97187 round_trippers.go:580]     Audit-Id: 94a2ebc1-1193-4795-bf6c-799a515f7a3e
	I0925 10:52:45.231737   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"331","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations [truncated 6037 chars]
	I0925 10:52:45.232272   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:52:45.232286   97187 round_trippers.go:469] Request Headers:
	I0925 10:52:45.232293   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:52:45.232298   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:52:45.234480   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:52:45.234498   97187 round_trippers.go:577] Response Headers:
	I0925 10:52:45.234508   97187 round_trippers.go:580]     Audit-Id: 7e83cc62-47fb-413d-83eb-dba02b97b79e
	I0925 10:52:45.234515   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:52:45.234524   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:52:45.234531   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:52:45.234540   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:52:45.234553   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:52:45 GMT
	I0925 10:52:45.234662   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"331","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations [truncated 6037 chars]
	I0925 10:52:45.361068   97187 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0925 10:52:45.362748   97187 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0925 10:52:45.736129   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:52:45.736168   97187 round_trippers.go:469] Request Headers:
	I0925 10:52:45.736183   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:52:45.736189   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:52:45.746320   97187 command_runner.go:130] > configmap/coredns replaced
	I0925 10:52:45.746412   97187 start.go:923] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
	I0925 10:52:45.747305   97187 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0925 10:52:45.747333   97187 round_trippers.go:577] Response Headers:
	I0925 10:52:45.747363   97187 round_trippers.go:580]     Audit-Id: 2d04550d-10c9-48b5-9af2-97e22fc8104e
	I0925 10:52:45.747377   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:52:45.747389   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:52:45.747399   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:52:45.747412   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:52:45.747435   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:52:45 GMT
	I0925 10:52:45.747572   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"331","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations [truncated 6037 chars]
	I0925 10:52:46.102586   97187 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0925 10:52:46.107423   97187 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0925 10:52:46.114693   97187 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0925 10:52:46.120411   97187 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0925 10:52:46.145011   97187 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0925 10:52:46.154197   97187 command_runner.go:130] > pod/storage-provisioner created
	I0925 10:52:46.159458   97187 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0925 10:52:46.160961   97187 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0925 10:52:46.162259   97187 addons.go:502] enable addons completed in 1.012772761s: enabled=[storage-provisioner default-storageclass]
	I0925 10:52:46.235718   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:52:46.235738   97187 round_trippers.go:469] Request Headers:
	I0925 10:52:46.235745   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:52:46.235751   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:52:46.237969   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:52:46.237988   97187 round_trippers.go:577] Response Headers:
	I0925 10:52:46.237994   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:52:46.238000   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:52:46 GMT
	I0925 10:52:46.238007   97187 round_trippers.go:580]     Audit-Id: 635ad074-aa90-4014-adc6-1449e7b6f40b
	I0925 10:52:46.238016   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:52:46.238023   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:52:46.238031   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:52:46.238245   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:52:46.735819   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:52:46.735841   97187 round_trippers.go:469] Request Headers:
	I0925 10:52:46.735849   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:52:46.735859   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:52:46.738135   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:52:46.738156   97187 round_trippers.go:577] Response Headers:
	I0925 10:52:46.738166   97187 round_trippers.go:580]     Audit-Id: 07615c63-dfad-4829-8327-9649ed9f15cf
	I0925 10:52:46.738175   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:52:46.738183   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:52:46.738190   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:52:46.738197   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:52:46.738204   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:52:46 GMT
	I0925 10:52:46.738316   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:52:47.236019   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:52:47.236046   97187 round_trippers.go:469] Request Headers:
	I0925 10:52:47.236054   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:52:47.236063   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:52:47.238387   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:52:47.238410   97187 round_trippers.go:577] Response Headers:
	I0925 10:52:47.238420   97187 round_trippers.go:580]     Audit-Id: 964442c1-f16a-41a1-97b1-89c1060913be
	I0925 10:52:47.238429   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:52:47.238437   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:52:47.238445   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:52:47.238453   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:52:47.238464   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:52:47 GMT
	I0925 10:52:47.238597   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:52:47.238921   97187 node_ready.go:58] node "multinode-529126" has status "Ready":"False"
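
The loop above repeats because minikube's node_ready helper polls GET /api/v1/nodes/multinode-529126 roughly every 500ms (visible in the timestamps) and re-checks the node's Ready condition, which is still False while the kubelet brings the node up. A minimal sketch of that wait pattern, assuming client-go and the default kubeconfig path (an illustration of the pattern only, not minikube's actual node_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a kubeconfig for the cluster under test at the default location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 500ms, matching the cadence in the log; the 6-minute timeout is an assumption.
	err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "multinode-529126", metav1.GetOptions{})
		if err != nil {
			return false, nil // treat transient API errors as "not ready yet" and keep polling
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
	fmt.Println("node ready:", err == nil)
}

A watch on the Node object would avoid the repeated GETs, but a bounded poll like this is simpler to reason about during cluster startup, which is presumably why the log shows plain re-GETs.
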
	I0925 10:52:47.735224   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:52:47.735245   97187 round_trippers.go:469] Request Headers:
	I0925 10:52:47.735253   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:52:47.735259   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:52:47.737446   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:52:47.737469   97187 round_trippers.go:577] Response Headers:
	I0925 10:52:47.737479   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:52:47.737487   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:52:47 GMT
	I0925 10:52:47.737495   97187 round_trippers.go:580]     Audit-Id: 3f0ac47f-b5ac-488f-9ee8-68374f652e15
	I0925 10:52:47.737504   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:52:47.737517   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:52:47.737529   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:52:47.737650   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:52:48.235247   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:52:48.235269   97187 round_trippers.go:469] Request Headers:
	I0925 10:52:48.235277   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:52:48.235283   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:52:48.237547   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:52:48.237566   97187 round_trippers.go:577] Response Headers:
	I0925 10:52:48.237575   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:52:48.237583   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:52:48.237590   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:52:48.237598   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:52:48.237606   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:52:48 GMT
	I0925 10:52:48.237617   97187 round_trippers.go:580]     Audit-Id: 0a1825ec-d49b-4d31-b516-3df042b9c605
	I0925 10:52:48.237757   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:52:48.735592   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:52:48.735613   97187 round_trippers.go:469] Request Headers:
	I0925 10:52:48.735621   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:52:48.735627   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:52:48.737878   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:52:48.737897   97187 round_trippers.go:577] Response Headers:
	I0925 10:52:48.737905   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:52:48 GMT
	I0925 10:52:48.737910   97187 round_trippers.go:580]     Audit-Id: edc9993d-8733-42f1-9d15-147c8799d3f9
	I0925 10:52:48.737916   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:52:48.737921   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:52:48.737927   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:52:48.737932   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:52:48.738048   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:52:49.235698   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:52:49.235719   97187 round_trippers.go:469] Request Headers:
	I0925 10:52:49.235727   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:52:49.235733   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:52:49.237863   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:52:49.237882   97187 round_trippers.go:577] Response Headers:
	I0925 10:52:49.237890   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:52:49.237897   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:52:49.237905   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:52:49.237911   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:52:49 GMT
	I0925 10:52:49.237919   97187 round_trippers.go:580]     Audit-Id: 630d8faf-e1cf-460b-87cb-037c0bba41e6
	I0925 10:52:49.237931   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:52:49.238061   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:52:49.735787   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:52:49.735810   97187 round_trippers.go:469] Request Headers:
	I0925 10:52:49.735817   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:52:49.735823   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:52:49.738066   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:52:49.738092   97187 round_trippers.go:577] Response Headers:
	I0925 10:52:49.738103   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:52:49 GMT
	I0925 10:52:49.738112   97187 round_trippers.go:580]     Audit-Id: 52f8ee6f-1dd3-4715-b2d0-ce86c2c10738
	I0925 10:52:49.738121   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:52:49.738131   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:52:49.738141   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:52:49.738154   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:52:49.738270   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:52:49.738614   97187 node_ready.go:58] node "multinode-529126" has status "Ready":"False"
	I0925 10:52:50.235829   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:52:50.235849   97187 round_trippers.go:469] Request Headers:
	I0925 10:52:50.235862   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:52:50.235868   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:52:50.237972   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:52:50.237990   97187 round_trippers.go:577] Response Headers:
	I0925 10:52:50.237998   97187 round_trippers.go:580]     Audit-Id: 7f676dfa-ee2e-4275-8677-934d5298b70c
	I0925 10:52:50.238006   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:52:50.238015   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:52:50.238022   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:52:50.238035   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:52:50.238043   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:52:50 GMT
	I0925 10:52:50.238177   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:52:50.735772   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:52:50.735796   97187 round_trippers.go:469] Request Headers:
	I0925 10:52:50.735804   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:52:50.735810   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:52:50.737941   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:52:50.737960   97187 round_trippers.go:577] Response Headers:
	I0925 10:52:50.737966   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:52:50.737974   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:52:50.737982   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:52:50 GMT
	I0925 10:52:50.737990   97187 round_trippers.go:580]     Audit-Id: 094fdc23-a142-4468-a3f2-4ddfbebda3a7
	I0925 10:52:50.737997   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:52:50.738005   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:52:50.738102   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:52:51.235743   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:52:51.235766   97187 round_trippers.go:469] Request Headers:
	I0925 10:52:51.235776   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:52:51.235785   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:52:51.237882   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:52:51.237902   97187 round_trippers.go:577] Response Headers:
	I0925 10:52:51.237909   97187 round_trippers.go:580]     Audit-Id: 03fbb0cc-4eb9-45b9-afee-a60d9d86b319
	I0925 10:52:51.237914   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:52:51.237921   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:52:51.237929   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:52:51.237938   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:52:51.237945   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:52:51 GMT
	I0925 10:52:51.238088   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:52:51.735704   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:52:51.735727   97187 round_trippers.go:469] Request Headers:
	I0925 10:52:51.735735   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:52:51.735741   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:52:51.737883   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:52:51.737905   97187 round_trippers.go:577] Response Headers:
	I0925 10:52:51.737917   97187 round_trippers.go:580]     Audit-Id: d25cfecf-9750-4813-b0b5-4e3ceef43181
	I0925 10:52:51.737926   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:52:51.737935   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:52:51.737947   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:52:51.737958   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:52:51.737970   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:52:51 GMT
	I0925 10:52:51.738067   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:52:52.235727   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:52:52.235749   97187 round_trippers.go:469] Request Headers:
	I0925 10:52:52.235757   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:52:52.235763   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:52:52.238036   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:52:52.238060   97187 round_trippers.go:577] Response Headers:
	I0925 10:52:52.238071   97187 round_trippers.go:580]     Audit-Id: c6a82315-61dc-4670-9145-93f633ac0ee2
	I0925 10:52:52.238080   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:52:52.238088   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:52:52.238095   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:52:52.238107   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:52:52.238119   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:52:52 GMT
	I0925 10:52:52.238238   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:52:52.238550   97187 node_ready.go:58] node "multinode-529126" has status "Ready":"False"
	I0925 10:52:52.735837   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:52:52.735858   97187 round_trippers.go:469] Request Headers:
	I0925 10:52:52.735866   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:52:52.735872   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:52:52.738120   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:52:52.738141   97187 round_trippers.go:577] Response Headers:
	I0925 10:52:52.738150   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:52:52 GMT
	I0925 10:52:52.738158   97187 round_trippers.go:580]     Audit-Id: 028fceb3-9eac-4037-a5e4-2a500a86c949
	I0925 10:52:52.738165   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:52:52.738173   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:52:52.738182   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:52:52.738194   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:52:52.738337   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:52:53.236040   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:52:53.236066   97187 round_trippers.go:469] Request Headers:
	I0925 10:52:53.236082   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:52:53.236089   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:52:53.238350   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:52:53.238372   97187 round_trippers.go:577] Response Headers:
	I0925 10:52:53.238382   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:52:53.238390   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:52:53.238399   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:52:53 GMT
	I0925 10:52:53.238408   97187 round_trippers.go:580]     Audit-Id: feff9e4e-1b2f-43bb-9d0c-965494ac5aa9
	I0925 10:52:53.238420   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:52:53.238430   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:52:53.238568   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:52:53.735412   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:52:53.735437   97187 round_trippers.go:469] Request Headers:
	I0925 10:52:53.735447   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:52:53.735455   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:52:53.737757   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:52:53.737783   97187 round_trippers.go:577] Response Headers:
	I0925 10:52:53.737793   97187 round_trippers.go:580]     Audit-Id: 4466a91b-d58e-4530-84b6-e3cc8b21f052
	I0925 10:52:53.737801   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:52:53.737809   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:52:53.737820   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:52:53.737833   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:52:53.737843   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:52:53 GMT
	I0925 10:52:53.737967   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:52:54.235515   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:52:54.235539   97187 round_trippers.go:469] Request Headers:
	I0925 10:52:54.235549   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:52:54.235558   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:52:54.237618   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:52:54.237642   97187 round_trippers.go:577] Response Headers:
	I0925 10:52:54.237655   97187 round_trippers.go:580]     Audit-Id: cce23930-a39a-4d93-a9cb-6bfa5ea549c1
	I0925 10:52:54.237664   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:52:54.237672   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:52:54.237681   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:52:54.237690   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:52:54.237700   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:52:54 GMT
	I0925 10:52:54.237895   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:52:54.735411   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:52:54.735433   97187 round_trippers.go:469] Request Headers:
	I0925 10:52:54.735441   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:52:54.735447   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:52:54.737594   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:52:54.737615   97187 round_trippers.go:577] Response Headers:
	I0925 10:52:54.737624   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:52:54 GMT
	I0925 10:52:54.737632   97187 round_trippers.go:580]     Audit-Id: d26ead9f-6656-4adf-958c-a31f2f035b92
	I0925 10:52:54.737641   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:52:54.737652   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:52:54.737662   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:52:54.737676   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:52:54.737781   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:52:54.738122   97187 node_ready.go:58] node "multinode-529126" has status "Ready":"False"
	I0925 10:52:55.235101   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:52:55.235120   97187 round_trippers.go:469] Request Headers:
	I0925 10:52:55.235128   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:52:55.235134   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:52:55.237394   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:52:55.237414   97187 round_trippers.go:577] Response Headers:
	I0925 10:52:55.237423   97187 round_trippers.go:580]     Audit-Id: 8890afde-ea37-4969-8d66-0a036ef43a3c
	I0925 10:52:55.237431   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:52:55.237441   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:52:55.237450   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:52:55.237461   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:52:55.237474   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:52:55 GMT
	I0925 10:52:55.237592   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:52:55.735132   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:52:55.735151   97187 round_trippers.go:469] Request Headers:
	I0925 10:52:55.735159   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:52:55.735164   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:52:55.737350   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:52:55.737366   97187 round_trippers.go:577] Response Headers:
	I0925 10:52:55.737372   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:52:55 GMT
	I0925 10:52:55.737377   97187 round_trippers.go:580]     Audit-Id: d47c4e9d-528b-49fa-ab1f-e8cac91a62d8
	I0925 10:52:55.737382   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:52:55.737389   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:52:55.737395   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:52:55.737400   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:52:55.737497   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:52:56.235124   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:52:56.235145   97187 round_trippers.go:469] Request Headers:
	I0925 10:52:56.235153   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:52:56.235159   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:52:56.237200   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:52:56.237220   97187 round_trippers.go:577] Response Headers:
	I0925 10:52:56.237229   97187 round_trippers.go:580]     Audit-Id: 6caf78a1-b4a6-4d20-ae56-ad6aab9f665c
	I0925 10:52:56.237237   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:52:56.237245   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:52:56.237256   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:52:56.237266   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:52:56.237279   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:52:56 GMT
	I0925 10:52:56.237392   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:52:56.736059   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:52:56.736080   97187 round_trippers.go:469] Request Headers:
	I0925 10:52:56.736092   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:52:56.736098   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:52:56.738267   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:52:56.738285   97187 round_trippers.go:577] Response Headers:
	I0925 10:52:56.738292   97187 round_trippers.go:580]     Audit-Id: 7cf3c460-2a9f-450d-9972-ec7530d04fc6
	I0925 10:52:56.738298   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:52:56.738303   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:52:56.738308   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:52:56.738313   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:52:56.738318   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:52:56 GMT
	I0925 10:52:56.738477   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:52:56.738778   97187 node_ready.go:58] node "multinode-529126" has status "Ready":"False"
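
The same condition the sketch above checks can be read by hand while the wait is in progress; for example (an illustrative command, assuming kubectl has a context for this profile):

kubectl --context multinode-529126 get node multinode-529126 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

This prints True once the kubelet posts a Ready status update. Note that resourceVersion stays at "361" in every response above, i.e. the Node object genuinely had not changed between polls yet.
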
	I0925 10:52:57.236107   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:52:57.236125   97187 round_trippers.go:469] Request Headers:
	I0925 10:52:57.236133   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:52:57.236139   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:52:57.238224   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:52:57.238241   97187 round_trippers.go:577] Response Headers:
	I0925 10:52:57.238247   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:52:57.238252   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:52:57.238258   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:52:57.238263   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:52:57 GMT
	I0925 10:52:57.238267   97187 round_trippers.go:580]     Audit-Id: 46d15fa6-68a6-43f6-9c99-90ebeea68cb6
	I0925 10:52:57.238274   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:52:57.238427   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:52:57.736085   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:52:57.736113   97187 round_trippers.go:469] Request Headers:
	I0925 10:52:57.736125   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:52:57.736138   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:52:57.738273   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:52:57.738290   97187 round_trippers.go:577] Response Headers:
	I0925 10:52:57.738296   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:52:57.738302   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:52:57.738309   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:52:57.738314   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:52:57.738319   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:52:57 GMT
	I0925 10:52:57.738325   97187 round_trippers.go:580]     Audit-Id: c06bbd19-1531-4243-9f1a-6522b71ea904
	I0925 10:52:57.738508   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:52:58.236156   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:52:58.236180   97187 round_trippers.go:469] Request Headers:
	I0925 10:52:58.236188   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:52:58.236194   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:52:58.238390   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:52:58.238412   97187 round_trippers.go:577] Response Headers:
	I0925 10:52:58.238419   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:52:58.238425   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:52:58.238430   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:52:58.238435   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:52:58 GMT
	I0925 10:52:58.238440   97187 round_trippers.go:580]     Audit-Id: 805ff5c3-69b3-4175-b4da-e4fe6b549df1
	I0925 10:52:58.238446   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:52:58.238604   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:52:58.735424   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:52:58.735448   97187 round_trippers.go:469] Request Headers:
	I0925 10:52:58.735456   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:52:58.735461   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:52:58.737686   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:52:58.737706   97187 round_trippers.go:577] Response Headers:
	I0925 10:52:58.737713   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:52:58.737718   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:52:58 GMT
	I0925 10:52:58.737727   97187 round_trippers.go:580]     Audit-Id: b8c3c75a-870e-4d23-9893-53cf7542212e
	I0925 10:52:58.737735   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:52:58.737742   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:52:58.737752   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:52:58.737882   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:52:59.235169   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:52:59.235192   97187 round_trippers.go:469] Request Headers:
	I0925 10:52:59.235199   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:52:59.235206   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:52:59.237360   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:52:59.237386   97187 round_trippers.go:577] Response Headers:
	I0925 10:52:59.237398   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:52:59.237408   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:52:59.237420   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:52:59.237432   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:52:59.237444   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:52:59 GMT
	I0925 10:52:59.237455   97187 round_trippers.go:580]     Audit-Id: c989ec5d-7981-48c0-8abb-2e1b856a1c54
	I0925 10:52:59.237574   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:52:59.237887   97187 node_ready.go:58] node "multinode-529126" has status "Ready":"False"
	I0925 10:52:59.735091   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:52:59.735110   97187 round_trippers.go:469] Request Headers:
	I0925 10:52:59.735118   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:52:59.735125   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:52:59.737344   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:52:59.737368   97187 round_trippers.go:577] Response Headers:
	I0925 10:52:59.737378   97187 round_trippers.go:580]     Audit-Id: f7a55965-a6a9-4a40-bb90-2c3922d8137d
	I0925 10:52:59.737388   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:52:59.737400   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:52:59.737409   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:52:59.737423   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:52:59.737435   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:52:59 GMT
	I0925 10:52:59.737529   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:53:00.236149   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:00.236178   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:00.236191   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:00.236201   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:00.238430   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:00.238452   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:00.238459   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:00.238465   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:00.238470   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:00.238475   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:00 GMT
	I0925 10:53:00.238480   97187 round_trippers.go:580]     Audit-Id: ac312799-c8b7-4211-8790-4cfbe0c2a3a5
	I0925 10:53:00.238485   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:00.238602   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:53:00.735167   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:00.735192   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:00.735201   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:00.735207   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:00.737500   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:00.737522   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:00.737532   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:00.737540   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:00.737551   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:00 GMT
	I0925 10:53:00.737561   97187 round_trippers.go:580]     Audit-Id: c4693335-9e73-4a50-b67e-4da5d0cba6e0
	I0925 10:53:00.737573   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:00.737584   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:00.737689   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:53:01.235297   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:01.235326   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:01.235338   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:01.235348   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:01.237739   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:01.237759   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:01.237769   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:01.237779   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:01 GMT
	I0925 10:53:01.237787   97187 round_trippers.go:580]     Audit-Id: 19253f49-12e3-421b-968e-f869de79c384
	I0925 10:53:01.237798   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:01.237806   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:01.237813   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:01.237959   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:53:01.238288   97187 node_ready.go:58] node "multinode-529126" has status "Ready":"False"
	I0925 10:53:01.735455   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:01.735486   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:01.735494   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:01.735500   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:01.737545   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:01.737562   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:01.737569   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:01.737574   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:01 GMT
	I0925 10:53:01.737580   97187 round_trippers.go:580]     Audit-Id: 7ef21940-603e-4def-978c-ed5a0d8494a3
	I0925 10:53:01.737585   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:01.737590   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:01.737597   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:01.737728   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:53:02.235319   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:02.235357   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:02.235366   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:02.235372   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:02.237608   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:02.237627   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:02.237635   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:02 GMT
	I0925 10:53:02.237640   97187 round_trippers.go:580]     Audit-Id: afdd3973-a6f8-455f-94b9-4318b1bb5f33
	I0925 10:53:02.237645   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:02.237650   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:02.237656   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:02.237661   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:02.237850   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:53:02.735470   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:02.735493   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:02.735501   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:02.735507   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:02.737830   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:02.737860   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:02.737871   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:02.737879   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:02 GMT
	I0925 10:53:02.737887   97187 round_trippers.go:580]     Audit-Id: 521485e9-0ab3-4392-a581-4ae6831f1540
	I0925 10:53:02.737894   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:02.737901   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:02.737908   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:02.738057   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:53:03.235705   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:03.235725   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:03.235733   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:03.235739   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:03.237999   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:03.238022   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:03.238031   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:03.238038   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:03 GMT
	I0925 10:53:03.238046   97187 round_trippers.go:580]     Audit-Id: 59a2eeaf-b6e5-4d90-b0f9-98b9eabdd870
	I0925 10:53:03.238054   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:03.238062   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:03.238072   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:03.238250   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:53:03.238574   97187 node_ready.go:58] node "multinode-529126" has status "Ready":"False"
	I0925 10:53:03.735129   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:03.735150   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:03.735158   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:03.735164   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:03.737165   97187 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0925 10:53:03.737183   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:03.737190   97187 round_trippers.go:580]     Audit-Id: 55265061-0e71-430b-8489-a1cdc6a66f3f
	I0925 10:53:03.737196   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:03.737205   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:03.737213   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:03.737225   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:03.737234   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:03 GMT
	I0925 10:53:03.737343   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:53:04.236039   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:04.236067   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:04.236075   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:04.236081   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:04.238308   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:04.238331   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:04.238341   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:04.238349   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:04.238357   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:04 GMT
	I0925 10:53:04.238365   97187 round_trippers.go:580]     Audit-Id: a3d008dd-c7bf-4e51-8f43-b6c8e4dfddda
	I0925 10:53:04.238375   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:04.238383   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:04.238544   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:53:04.735910   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:04.735932   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:04.735940   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:04.735947   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:04.738127   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:04.738152   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:04.738159   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:04.738165   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:04.738170   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:04.738175   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:04 GMT
	I0925 10:53:04.738180   97187 round_trippers.go:580]     Audit-Id: eb56e5ce-c24f-4d70-a07f-d4a3067f3633
	I0925 10:53:04.738188   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:04.738345   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:53:05.236145   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:05.236170   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:05.236182   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:05.236192   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:05.238339   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:05.238357   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:05.238363   97187 round_trippers.go:580]     Audit-Id: 8c136fbc-3609-4252-8141-4b3c7212d30f
	I0925 10:53:05.238369   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:05.238374   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:05.238379   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:05.238386   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:05.238391   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:05 GMT
	I0925 10:53:05.238511   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:53:05.238828   97187 node_ready.go:58] node "multinode-529126" has status "Ready":"False"
	I0925 10:53:05.735110   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:05.735134   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:05.735141   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:05.735148   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:05.737436   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:05.737454   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:05.737461   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:05.737469   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:05.737477   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:05 GMT
	I0925 10:53:05.737484   97187 round_trippers.go:580]     Audit-Id: 5f0f8702-57c3-4a96-b5e8-38bd294c7905
	I0925 10:53:05.737492   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:05.737503   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:05.737616   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:53:06.235156   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:06.235176   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:06.235184   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:06.235189   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:06.237461   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:06.237484   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:06.237494   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:06.237502   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:06.237510   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:06.237519   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:06.237528   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:06 GMT
	I0925 10:53:06.237541   97187 round_trippers.go:580]     Audit-Id: 62741cb1-54ca-4930-8593-7d69b79e55fa
	I0925 10:53:06.237677   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:53:06.735155   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:06.735176   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:06.735183   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:06.735196   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:06.737425   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:06.737440   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:06.737446   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:06.737452   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:06.737457   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:06.737461   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:06.737472   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:06 GMT
	I0925 10:53:06.737477   97187 round_trippers.go:580]     Audit-Id: 36b94fc1-e0e0-49b0-b590-f0368021b213
	I0925 10:53:06.737635   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:53:07.235199   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:07.235218   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:07.235226   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:07.235233   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:07.237387   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:07.237402   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:07.237408   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:07.237414   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:07.237419   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:07 GMT
	I0925 10:53:07.237424   97187 round_trippers.go:580]     Audit-Id: 96cfcc90-38fa-4018-b822-f0d4a32835e6
	I0925 10:53:07.237429   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:07.237435   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:07.237602   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:53:07.735174   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:07.735194   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:07.735201   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:07.735207   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:07.737415   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:07.737445   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:07.737455   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:07.737461   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:07.737469   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:07.737482   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:07 GMT
	I0925 10:53:07.737494   97187 round_trippers.go:580]     Audit-Id: 34a0d3ec-d49d-4b14-a384-2b0cc6ef1bd4
	I0925 10:53:07.737506   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:07.737661   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:53:07.737966   97187 node_ready.go:58] node "multinode-529126" has status "Ready":"False"
	I0925 10:53:08.235150   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:08.235183   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:08.235191   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:08.235197   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:08.237600   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:08.237624   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:08.237636   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:08.237646   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:08.237656   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:08.237670   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:08.237683   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:08 GMT
	I0925 10:53:08.237696   97187 round_trippers.go:580]     Audit-Id: 114fbfc7-fa49-421b-89f2-6accb655f57e
	I0925 10:53:08.237846   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:53:08.735577   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:08.735599   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:08.735609   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:08.735615   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:08.737807   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:08.737830   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:08.737840   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:08.737848   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:08.737856   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:08.737867   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:08.737877   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:08 GMT
	I0925 10:53:08.737882   97187 round_trippers.go:580]     Audit-Id: 8a72df8e-ffd6-4ae3-84a7-c56583025ddf
	I0925 10:53:08.738012   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:53:09.235693   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:09.235711   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:09.235719   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:09.235726   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:09.238222   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:09.238239   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:09.238246   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:09.238253   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:09 GMT
	I0925 10:53:09.238262   97187 round_trippers.go:580]     Audit-Id: 6f265dd2-f261-4f2c-98a7-47484c3b0f8d
	I0925 10:53:09.238272   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:09.238283   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:09.238292   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:09.238488   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:53:09.736132   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:09.736173   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:09.736181   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:09.736187   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:09.738328   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:09.738354   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:09.738365   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:09.738373   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:09 GMT
	I0925 10:53:09.738381   97187 round_trippers.go:580]     Audit-Id: 5ea254a7-4d08-486d-ac7f-b6983cc5a25a
	I0925 10:53:09.738397   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:09.738406   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:09.738417   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:09.738522   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:53:09.738826   97187 node_ready.go:58] node "multinode-529126" has status "Ready":"False"
	I0925 10:53:10.235094   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:10.235114   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:10.235122   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:10.235128   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:10.237449   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:10.237468   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:10.237478   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:10 GMT
	I0925 10:53:10.237483   97187 round_trippers.go:580]     Audit-Id: 517f873b-11d6-4488-b70c-d697435e4dc0
	I0925 10:53:10.237490   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:10.237498   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:10.237506   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:10.237514   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:10.237658   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:53:10.735216   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:10.735237   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:10.735245   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:10.735251   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:10.737521   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:10.737539   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:10.737545   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:10.737551   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:10.737556   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:10 GMT
	I0925 10:53:10.737561   97187 round_trippers.go:580]     Audit-Id: f450c223-b265-4f9e-9f3f-6f9da795c8bb
	I0925 10:53:10.737568   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:10.737574   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:10.737713   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:53:11.235172   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:11.235196   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:11.235207   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:11.235213   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:11.237662   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:11.237682   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:11.237688   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:11.237693   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:11.237699   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:11.237703   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:11 GMT
	I0925 10:53:11.237708   97187 round_trippers.go:580]     Audit-Id: b7b417e3-6582-41a6-9b6e-d36981b8791b
	I0925 10:53:11.237715   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:11.237843   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:53:11.735451   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:11.735473   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:11.735481   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:11.735487   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:11.737669   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:11.737692   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:11.737700   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:11.737708   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:11.737716   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:11.737724   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:11 GMT
	I0925 10:53:11.737737   97187 round_trippers.go:580]     Audit-Id: be79d44b-eb02-4b6d-88d6-a9773d43d106
	I0925 10:53:11.737745   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:11.737840   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:53:12.235462   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:12.235481   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:12.235489   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:12.235495   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:12.237696   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:12.237713   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:12.237720   97187 round_trippers.go:580]     Audit-Id: d1865c5e-067f-4bab-820c-e789daf79d69
	I0925 10:53:12.237725   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:12.237730   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:12.237735   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:12.237741   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:12.237746   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:12 GMT
	I0925 10:53:12.237912   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:53:12.238274   97187 node_ready.go:58] node "multinode-529126" has status "Ready":"False"
	I0925 10:53:12.735153   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:12.735173   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:12.735184   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:12.735192   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:12.737765   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:12.737785   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:12.737794   97187 round_trippers.go:580]     Audit-Id: 69ff91cb-3425-4ad8-b387-b27cfd940f06
	I0925 10:53:12.737802   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:12.737809   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:12.737817   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:12.737828   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:12.737840   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:12 GMT
	I0925 10:53:12.737994   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:53:13.235595   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:13.235621   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:13.235631   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:13.235639   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:13.237988   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:13.238010   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:13.238019   97187 round_trippers.go:580]     Audit-Id: cbaff60c-43bc-49a0-a395-53da90361426
	I0925 10:53:13.238026   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:13.238034   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:13.238042   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:13.238053   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:13.238065   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:13 GMT
	I0925 10:53:13.238177   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:53:13.735111   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:13.735134   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:13.735148   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:13.735156   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:13.737420   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:13.737444   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:13.737453   97187 round_trippers.go:580]     Audit-Id: c33aa23e-dfc6-44be-b205-95a30b8e1ceb
	I0925 10:53:13.737461   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:13.737469   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:13.737477   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:13.737487   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:13.737495   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:13 GMT
	I0925 10:53:13.737671   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:53:14.235215   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:14.235238   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:14.235245   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:14.235251   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:14.237655   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:14.237676   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:14.237685   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:14.237696   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:14.237705   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:14 GMT
	I0925 10:53:14.237716   97187 round_trippers.go:580]     Audit-Id: b3a60d02-5059-42ca-b67c-f8e72c42d73e
	I0925 10:53:14.237724   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:14.237740   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:14.237864   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:53:14.735448   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:14.735472   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:14.735486   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:14.735496   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:14.737761   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:14.737780   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:14.737789   97187 round_trippers.go:580]     Audit-Id: 5865f934-e4d3-4c14-87bc-9a1f702ddd14
	I0925 10:53:14.737796   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:14.737804   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:14.737813   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:14.737823   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:14.737836   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:14 GMT
	I0925 10:53:14.737950   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:53:14.738259   97187 node_ready.go:58] node "multinode-529126" has status "Ready":"False"
	I0925 10:53:15.235789   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:15.235807   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:15.235815   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:15.235821   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:15.238017   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:15.238036   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:15.238045   97187 round_trippers.go:580]     Audit-Id: 7dd000e4-e637-4a47-9aa1-fad948ae2622
	I0925 10:53:15.238053   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:15.238062   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:15.238071   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:15.238084   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:15.238095   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:15 GMT
	I0925 10:53:15.238220   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:53:15.735902   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:15.735927   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:15.735942   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:15.735953   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:15.738175   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:15.738193   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:15.738199   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:15.738204   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:15.738210   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:15.738215   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:15.738221   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:15 GMT
	I0925 10:53:15.738228   97187 round_trippers.go:580]     Audit-Id: d0657a2a-9f68-4008-964e-28df1bef82df
	I0925 10:53:15.738378   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:53:16.235087   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:16.235111   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:16.235119   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:16.235125   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:16.237411   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:16.237434   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:16.237444   97187 round_trippers.go:580]     Audit-Id: 2f0f5244-20da-45b3-8412-52819d31685a
	I0925 10:53:16.237453   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:16.237461   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:16.237473   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:16.237481   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:16.237494   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:16 GMT
	I0925 10:53:16.237634   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:53:16.735237   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:16.735257   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:16.735265   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:16.735271   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:16.737350   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:16.737371   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:16.737378   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:16.737384   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:16.737389   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:16.737394   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:16.737401   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:16 GMT
	I0925 10:53:16.737406   97187 round_trippers.go:580]     Audit-Id: a9228f75-f3b7-49e6-b28d-55c81b4be387
	I0925 10:53:16.737546   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"361","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0925 10:53:17.235125   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:17.235148   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:17.235155   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:17.235161   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:17.237345   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:17.237370   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:17.237380   97187 round_trippers.go:580]     Audit-Id: 2ffd10df-4e54-4fa0-a769-6f09751ab846
	I0925 10:53:17.237388   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:17.237400   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:17.237407   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:17.237415   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:17.237427   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:17 GMT
	I0925 10:53:17.237541   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"423","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0925 10:53:17.237849   97187 node_ready.go:49] node "multinode-529126" has status "Ready":"True"
	I0925 10:53:17.237864   97187 node_ready.go:38] duration metric: took 32.008575589s waiting for node "multinode-529126" to be "Ready" ...
	I0925 10:53:17.237872   97187 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
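The 500 ms GET loop recorded above is a node-readiness poll: fetch the Node object, inspect its Ready condition, repeat until it reports True. A minimal client-go sketch of that pattern follows — this is not minikube's actual implementation; the kubeconfig path, the 6-minute budget, and the error handling are assumptions for illustration, while the node name and the 500 ms interval are taken from the log.

// Minimal sketch of the node-readiness poll seen in the trace above.
// Assumptions (not from the log): kubeconfig at the default home path,
// a 6-minute overall budget, and "ignore transient errors" semantics.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Issue GET /api/v1/nodes/<name> every 500 ms until Ready=True or the
	// budget runs out — exactly the request loop recorded above.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, getErr := client.CoreV1().Nodes().Get(ctx, "multinode-529126", metav1.GetOptions{})
			if getErr != nil {
				return false, nil // treat errors as "not ready yet" and keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	fmt.Println("node ready:", err == nil)
}

For an ad-hoc equivalent from a shell, `kubectl wait --for=condition=Ready node/multinode-529126 --timeout=6m` performs the same wait.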
	I0925 10:53:17.237931   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0925 10:53:17.237939   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:17.237946   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:17.237952   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:17.241046   97187 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0925 10:53:17.241063   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:17.241069   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:17.241075   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:17.241080   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:17 GMT
	I0925 10:53:17.241085   97187 round_trippers.go:580]     Audit-Id: 2efb93e8-d932-4227-b6de-f0d3c6a3a1a2
	I0925 10:53:17.241090   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:17.241095   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:17.241607   97187 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"429"},"items":[{"metadata":{"name":"coredns-5dd5756b68-bl6dx","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a90d3ac5-8e74-4c8e-8e26-54a4ca4e1274","resourceVersion":"429","creationTimestamp":"2023-09-25T10:52:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"17c53e32-79cf-483d-acaf-d20cd52d9012","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"17c53e32-79cf-483d-acaf-d20cd52d9012\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55534 chars]
	I0925 10:53:17.244597   97187 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-bl6dx" in "kube-system" namespace to be "Ready" ...
	I0925 10:53:17.244700   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bl6dx
	I0925 10:53:17.244713   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:17.244725   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:17.244737   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:17.246722   97187 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0925 10:53:17.246742   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:17.246752   97187 round_trippers.go:580]     Audit-Id: 2082009c-3d90-4d16-b860-c3b824fc3d36
	I0925 10:53:17.246760   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:17.246769   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:17.246784   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:17.246792   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:17.246804   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:17 GMT
	I0925 10:53:17.246917   97187 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bl6dx","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a90d3ac5-8e74-4c8e-8e26-54a4ca4e1274","resourceVersion":"429","creationTimestamp":"2023-09-25T10:52:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"17c53e32-79cf-483d-acaf-d20cd52d9012","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"17c53e32-79cf-483d-acaf-d20cd52d9012\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0925 10:53:17.247326   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:17.247340   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:17.247347   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:17.247354   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:17.248989   97187 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0925 10:53:17.249004   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:17.249010   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:17.249015   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:17.249020   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:17 GMT
	I0925 10:53:17.249025   97187 round_trippers.go:580]     Audit-Id: 5309d102-5cc6-4b2c-a7ca-eef1f5b38b42
	I0925 10:53:17.249031   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:17.249036   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:17.249138   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"423","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0925 10:53:17.249428   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bl6dx
	I0925 10:53:17.249438   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:17.249445   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:17.249450   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:17.251069   97187 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0925 10:53:17.251088   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:17.251098   97187 round_trippers.go:580]     Audit-Id: fe66dcde-2862-442d-84b4-1a3921dfb0c9
	I0925 10:53:17.251107   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:17.251119   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:17.251128   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:17.251140   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:17.251152   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:17 GMT
	I0925 10:53:17.251274   97187 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bl6dx","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a90d3ac5-8e74-4c8e-8e26-54a4ca4e1274","resourceVersion":"429","creationTimestamp":"2023-09-25T10:52:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"17c53e32-79cf-483d-acaf-d20cd52d9012","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"17c53e32-79cf-483d-acaf-d20cd52d9012\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0925 10:53:17.251619   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:17.251629   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:17.251636   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:17.251642   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:17.253097   97187 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0925 10:53:17.253113   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:17.253122   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:17.253129   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:17.253137   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:17.253147   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:17.253160   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:17 GMT
	I0925 10:53:17.253172   97187 round_trippers.go:580]     Audit-Id: 9f65b158-9ef6-4ee0-957a-b12c0afc119d
	I0925 10:53:17.253353   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"423","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0925 10:53:17.754386   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bl6dx
	I0925 10:53:17.754405   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:17.754413   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:17.754418   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:17.756813   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:17.756832   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:17.756839   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:17.756844   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:17.756849   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:17.756854   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:17 GMT
	I0925 10:53:17.756859   97187 round_trippers.go:580]     Audit-Id: 58439a0a-66dd-483b-a1e5-55e211082ac1
	I0925 10:53:17.756864   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:17.757020   97187 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bl6dx","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a90d3ac5-8e74-4c8e-8e26-54a4ca4e1274","resourceVersion":"429","creationTimestamp":"2023-09-25T10:52:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"17c53e32-79cf-483d-acaf-d20cd52d9012","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"17c53e32-79cf-483d-acaf-d20cd52d9012\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0925 10:53:17.757438   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:17.757447   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:17.757455   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:17.757463   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:17.759327   97187 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0925 10:53:17.759342   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:17.759349   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:17.759354   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:17.759359   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:17.759364   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:17 GMT
	I0925 10:53:17.759369   97187 round_trippers.go:580]     Audit-Id: 23e52ab1-faf9-4f0b-a3c2-f398f7081d39
	I0925 10:53:17.759375   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:17.759512   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"423","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0925 10:53:18.254153   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bl6dx
	I0925 10:53:18.254174   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:18.254182   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:18.254188   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:18.256517   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:18.256536   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:18.256545   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:18 GMT
	I0925 10:53:18.256552   97187 round_trippers.go:580]     Audit-Id: c27df112-13c4-40ee-bda6-83dcd9bf2afe
	I0925 10:53:18.256562   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:18.256573   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:18.256581   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:18.256592   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:18.256779   97187 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bl6dx","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a90d3ac5-8e74-4c8e-8e26-54a4ca4e1274","resourceVersion":"442","creationTimestamp":"2023-09-25T10:52:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"17c53e32-79cf-483d-acaf-d20cd52d9012","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"17c53e32-79cf-483d-acaf-d20cd52d9012\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0925 10:53:18.257210   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:18.257220   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:18.257229   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:18.257237   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:18.258972   97187 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0925 10:53:18.258992   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:18.259001   97187 round_trippers.go:580]     Audit-Id: f1f48427-b875-4397-9c5d-1749ce4b840b
	I0925 10:53:18.259008   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:18.259015   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:18.259022   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:18.259030   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:18.259037   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:18 GMT
	I0925 10:53:18.259173   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"423","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0925 10:53:18.259460   97187 pod_ready.go:92] pod "coredns-5dd5756b68-bl6dx" in "kube-system" namespace has status "Ready":"True"
	I0925 10:53:18.259475   97187 pod_ready.go:81] duration metric: took 1.014854237s waiting for pod "coredns-5dd5756b68-bl6dx" in "kube-system" namespace to be "Ready" ...
	I0925 10:53:18.259483   97187 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-529126" in "kube-system" namespace to be "Ready" ...
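Each of these pod_ready waits reduces to reading the pod's PodReady condition from the GET responses above. A sketch of that check, reusing the imports from the previous snippet (again an illustration, not minikube's code; the helper name podIsReady is hypothetical):

// A pod counts as "Ready" when its PodReady condition reports True,
// which is what the per-pod GETs in the trace are testing for.
func podIsReady(ctx context.Context, client kubernetes.Interface, namespace, name string) (bool, error) {
	pod, err := client.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

// Usage matching the log, e.g.:
//   ready, err := podIsReady(ctx, client, "kube-system", "etcd-multinode-529126")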
	I0925 10:53:18.259522   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-529126
	I0925 10:53:18.259529   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:18.259535   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:18.259541   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:18.261148   97187 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0925 10:53:18.261162   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:18.261168   97187 round_trippers.go:580]     Audit-Id: 2b048a08-56b3-4a65-9bd2-abcc32d4d7e8
	I0925 10:53:18.261174   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:18.261179   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:18.261184   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:18.261189   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:18.261196   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:18 GMT
	I0925 10:53:18.261342   97187 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-529126","namespace":"kube-system","uid":"183f855c-8718-4c7f-a90c-5491729da613","resourceVersion":"352","creationTimestamp":"2023-09-25T10:52:33Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"7310fa126167d348c6d813d092a2c83e","kubernetes.io/config.mirror":"7310fa126167d348c6d813d092a2c83e","kubernetes.io/config.seen":"2023-09-25T10:52:32.952572508Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0925 10:53:18.261709   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:18.261723   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:18.261733   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:18.261742   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:18.263272   97187 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0925 10:53:18.263285   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:18.263291   97187 round_trippers.go:580]     Audit-Id: 2c1f9460-33c6-407a-b809-a640d5fee3d9
	I0925 10:53:18.263296   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:18.263301   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:18.263307   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:18.263312   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:18.263317   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:18 GMT
	I0925 10:53:18.263440   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"423","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0925 10:53:18.263723   97187 pod_ready.go:92] pod "etcd-multinode-529126" in "kube-system" namespace has status "Ready":"True"
	I0925 10:53:18.263737   97187 pod_ready.go:81] duration metric: took 4.247879ms waiting for pod "etcd-multinode-529126" in "kube-system" namespace to be "Ready" ...
	I0925 10:53:18.263752   97187 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-529126" in "kube-system" namespace to be "Ready" ...
	I0925 10:53:18.263802   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-529126
	I0925 10:53:18.263812   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:18.263823   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:18.263834   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:18.265375   97187 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0925 10:53:18.265387   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:18.265393   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:18.265398   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:18.265403   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:18 GMT
	I0925 10:53:18.265408   97187 round_trippers.go:580]     Audit-Id: 68bf151d-9f45-4aeb-9f8b-24dbeec8d71d
	I0925 10:53:18.265415   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:18.265430   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:18.265579   97187 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-529126","namespace":"kube-system","uid":"19c42393-64d3-470e-9f21-aad8c233bf42","resourceVersion":"327","creationTimestamp":"2023-09-25T10:52:33Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"ad3d553cc6255758ba9dea20bc3a62bd","kubernetes.io/config.mirror":"ad3d553cc6255758ba9dea20bc3a62bd","kubernetes.io/config.seen":"2023-09-25T10:52:32.952574240Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0925 10:53:18.265985   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:18.266000   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:18.266010   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:18.266032   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:18.267541   97187 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0925 10:53:18.267554   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:18.267560   97187 round_trippers.go:580]     Audit-Id: c152f714-7e2f-4264-ad4a-8e8f599e6b4b
	I0925 10:53:18.267565   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:18.267571   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:18.267579   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:18.267585   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:18.267595   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:18 GMT
	I0925 10:53:18.267703   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"423","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0925 10:53:18.267974   97187 pod_ready.go:92] pod "kube-apiserver-multinode-529126" in "kube-system" namespace has status "Ready":"True"
	I0925 10:53:18.267987   97187 pod_ready.go:81] duration metric: took 4.225618ms waiting for pod "kube-apiserver-multinode-529126" in "kube-system" namespace to be "Ready" ...
	I0925 10:53:18.267995   97187 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-529126" in "kube-system" namespace to be "Ready" ...
	I0925 10:53:18.268042   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-529126
	I0925 10:53:18.268050   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:18.268056   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:18.268062   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:18.273642   97187 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0925 10:53:18.273657   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:18.273665   97187 round_trippers.go:580]     Audit-Id: 51c1f820-6de0-4086-8f13-aab08fe11e82
	I0925 10:53:18.273673   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:18.273682   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:18.273691   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:18.273700   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:18.273709   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:18 GMT
	I0925 10:53:18.273950   97187 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-529126","namespace":"kube-system","uid":"8091d853-28f2-45bf-924a-88a9809f836e","resourceVersion":"306","creationTimestamp":"2023-09-25T10:52:33Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f3d738b53ad837e74ec7a881fba0aa09","kubernetes.io/config.mirror":"f3d738b53ad837e74ec7a881fba0aa09","kubernetes.io/config.seen":"2023-09-25T10:52:32.952575789Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0925 10:53:18.435622   97187 request.go:629] Waited for 161.312413ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:18.435674   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:18.435681   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:18.435689   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:18.435699   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:18.437866   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:18.437883   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:18.437890   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:18.437896   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:18.437901   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:18.437906   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:18.437911   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:18 GMT
	I0925 10:53:18.437916   97187 round_trippers.go:580]     Audit-Id: 0247866b-a495-40cd-9d3a-5e31bac25946
	I0925 10:53:18.438102   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"423","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0925 10:53:18.438398   97187 pod_ready.go:92] pod "kube-controller-manager-multinode-529126" in "kube-system" namespace has status "Ready":"True"
	I0925 10:53:18.438413   97187 pod_ready.go:81] duration metric: took 170.411748ms waiting for pod "kube-controller-manager-multinode-529126" in "kube-system" namespace to be "Ready" ...
	I0925 10:53:18.438423   97187 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wlsv6" in "kube-system" namespace to be "Ready" ...
	I0925 10:53:18.635836   97187 request.go:629] Waited for 197.350421ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wlsv6
	I0925 10:53:18.635905   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wlsv6
	I0925 10:53:18.635913   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:18.635921   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:18.635927   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:18.638267   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:18.638284   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:18.638290   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:18.638297   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:18 GMT
	I0925 10:53:18.638302   97187 round_trippers.go:580]     Audit-Id: 96db39a8-3987-4ee8-81c6-a813251d7484
	I0925 10:53:18.638307   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:18.638312   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:18.638318   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:18.638503   97187 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wlsv6","generateName":"kube-proxy-","namespace":"kube-system","uid":"e04d98ce-ec4c-4cb4-8ae8-329b6240c025","resourceVersion":"408","creationTimestamp":"2023-09-25T10:52:45Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"efa2104a-efe3-45ad-b54a-2bd7d8d60a92","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"efa2104a-efe3-45ad-b54a-2bd7d8d60a92\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0925 10:53:18.835875   97187 request.go:629] Waited for 196.908547ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:18.835933   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:18.835938   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:18.835945   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:18.835952   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:18.838245   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:18.838263   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:18.838269   97187 round_trippers.go:580]     Audit-Id: cef26e99-ad28-4b75-8ab0-260a0fee9daf
	I0925 10:53:18.838274   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:18.838280   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:18.838284   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:18.838289   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:18.838295   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:18 GMT
	I0925 10:53:18.838467   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"423","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0925 10:53:18.838772   97187 pod_ready.go:92] pod "kube-proxy-wlsv6" in "kube-system" namespace has status "Ready":"True"
	I0925 10:53:18.838791   97187 pod_ready.go:81] duration metric: took 400.354835ms waiting for pod "kube-proxy-wlsv6" in "kube-system" namespace to be "Ready" ...
	I0925 10:53:18.838803   97187 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-529126" in "kube-system" namespace to be "Ready" ...
	I0925 10:53:19.035180   97187 request.go:629] Waited for 196.306345ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-529126
	I0925 10:53:19.035277   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-529126
	I0925 10:53:19.035289   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:19.035299   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:19.035312   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:19.037591   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:19.037610   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:19.037616   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:19.037621   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:19.037627   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:19.037632   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:19 GMT
	I0925 10:53:19.037637   97187 round_trippers.go:580]     Audit-Id: 855355fd-9368-46ee-a867-a61b5542d3e0
	I0925 10:53:19.037642   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:19.037789   97187 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-529126","namespace":"kube-system","uid":"36d25961-075e-4692-8ac5-bc14a734e7e0","resourceVersion":"319","creationTimestamp":"2023-09-25T10:52:33Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ef4bf696b4d00407d5ead8e1c16c7583","kubernetes.io/config.mirror":"ef4bf696b4d00407d5ead8e1c16c7583","kubernetes.io/config.seen":"2023-09-25T10:52:32.952567913Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0925 10:53:19.235501   97187 request.go:629] Waited for 197.354164ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:19.235586   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:19.235597   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:19.235604   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:19.235610   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:19.237899   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:19.237915   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:19.237924   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:19.237929   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:19.237934   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:19 GMT
	I0925 10:53:19.237939   97187 round_trippers.go:580]     Audit-Id: eb3e25f5-0ef4-44f2-bd84-cb074979a1b2
	I0925 10:53:19.237945   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:19.237950   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:19.238075   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"423","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0925 10:53:19.238364   97187 pod_ready.go:92] pod "kube-scheduler-multinode-529126" in "kube-system" namespace has status "Ready":"True"
	I0925 10:53:19.238374   97187 pod_ready.go:81] duration metric: took 399.562154ms waiting for pod "kube-scheduler-multinode-529126" in "kube-system" namespace to be "Ready" ...
	I0925 10:53:19.238383   97187 pod_ready.go:38] duration metric: took 2.000492458s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
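The readiness loop above (pod_ready.go) repeats one pattern per control-plane component: GET the pod, inspect its Ready condition, then GET the node, with client-side throttling supplying the ~160-200ms waits. Below is a minimal sketch of that condition check using client-go's polling helper; the function name and clientset wiring are assumptions, not minikube's actual implementation.

package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls until the pod's Ready condition is True, mirroring
// the "waiting up to 6m0s for pod ... to be Ready" entries above.
func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "not ready yet"
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}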
	I0925 10:53:19.238396   97187 api_server.go:52] waiting for apiserver process to appear ...
	I0925 10:53:19.238443   97187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 10:53:19.247949   97187 command_runner.go:130] > 1420
	I0925 10:53:19.248709   97187 api_server.go:72] duration metric: took 34.075472072s to wait for apiserver process to appear ...
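The process check above is just pgrep run over the node's SSH session; once a PID (1420 here) comes back, the wait ends. A local-shell approximation, under the assumption that retrying on pgrep's non-zero exit is the desired behavior:

package sketch

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitAPIServerProcess retries the same pgrep probe the log runs until a
// matching PID appears or the deadline passes (hypothetical helper).
func waitAPIServerProcess(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil // e.g. "1420"
		}
		time.Sleep(500 * time.Millisecond) // pgrep exits non-zero when nothing matches
	}
	return "", fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}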
	I0925 10:53:19.248727   97187 api_server.go:88] waiting for apiserver healthz status ...
	I0925 10:53:19.248747   97187 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0925 10:53:19.252693   97187 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0925 10:53:19.252750   97187 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I0925 10:53:19.252757   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:19.252764   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:19.252772   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:19.253644   97187 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0925 10:53:19.253665   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:19.253674   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:19.253682   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:19.253690   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:19.253695   97187 round_trippers.go:580]     Content-Length: 263
	I0925 10:53:19.253702   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:19 GMT
	I0925 10:53:19.253707   97187 round_trippers.go:580]     Audit-Id: 9e7acbbd-6a7a-4ad9-992d-5292c91d71e8
	I0925 10:53:19.253715   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:19.253735   97187 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.2",
	  "gitCommit": "89a4ea3e1e4ddd7f7572286090359983e0387b2f",
	  "gitTreeState": "clean",
	  "buildDate": "2023-09-13T09:29:07Z",
	  "goVersion": "go1.20.8",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0925 10:53:19.253839   97187 api_server.go:141] control plane version: v1.28.2
	I0925 10:53:19.253868   97187 api_server.go:131] duration metric: took 5.133621ms to wait for apiserver health ...
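Health and version are two plain GETs: /healthz must return 200 with body "ok", and /version yields the JSON shown above, from which the control-plane version (v1.28.2) is read. A sketch assuming an *http.Client already configured to trust the cluster CA (that setup is omitted here):

package sketch

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"strings"
)

type versionInfo struct {
	Major      string `json:"major"`
	Minor      string `json:"minor"`
	GitVersion string `json:"gitVersion"`
}

// checkControlPlane probes /healthz and then fetches /version, e.g. with
// base = "https://192.168.58.2:8443".
func checkControlPlane(client *http.Client, base string) (*versionInfo, error) {
	resp, err := client.Get(base + "/healthz")
	if err != nil {
		return nil, err
	}
	body, _ := io.ReadAll(resp.Body)
	resp.Body.Close()
	if resp.StatusCode != http.StatusOK || strings.TrimSpace(string(body)) != "ok" {
		return nil, fmt.Errorf("healthz: status %d, body %q", resp.StatusCode, body)
	}
	resp, err = client.Get(base + "/version")
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	var v versionInfo
	if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
		return nil, err
	}
	return &v, nil // GitVersion is "v1.28.2" in the response above
}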
	I0925 10:53:19.253880   97187 system_pods.go:43] waiting for kube-system pods to appear ...
	I0925 10:53:19.435192   97187 request.go:629] Waited for 181.250943ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0925 10:53:19.435248   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0925 10:53:19.435253   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:19.435272   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:19.435278   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:19.438386   97187 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0925 10:53:19.438408   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:19.438425   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:19 GMT
	I0925 10:53:19.438434   97187 round_trippers.go:580]     Audit-Id: 60439890-2656-4922-8c8d-1aa4d3419212
	I0925 10:53:19.438439   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:19.438444   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:19.438449   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:19.438455   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:19.440886   97187 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"446"},"items":[{"metadata":{"name":"coredns-5dd5756b68-bl6dx","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a90d3ac5-8e74-4c8e-8e26-54a4ca4e1274","resourceVersion":"442","creationTimestamp":"2023-09-25T10:52:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"17c53e32-79cf-483d-acaf-d20cd52d9012","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"17c53e32-79cf-483d-acaf-d20cd52d9012\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I0925 10:53:19.442571   97187 system_pods.go:59] 8 kube-system pods found
	I0925 10:53:19.442591   97187 system_pods.go:61] "coredns-5dd5756b68-bl6dx" [a90d3ac5-8e74-4c8e-8e26-54a4ca4e1274] Running
	I0925 10:53:19.442595   97187 system_pods.go:61] "etcd-multinode-529126" [183f855c-8718-4c7f-a90c-5491729da613] Running
	I0925 10:53:19.442600   97187 system_pods.go:61] "kindnet-62xf8" [23f29aa7-de9c-43bc-950c-59009bd0d74e] Running
	I0925 10:53:19.442603   97187 system_pods.go:61] "kube-apiserver-multinode-529126" [19c42393-64d3-470e-9f21-aad8c233bf42] Running
	I0925 10:53:19.442609   97187 system_pods.go:61] "kube-controller-manager-multinode-529126" [8091d853-28f2-45bf-924a-88a9809f836e] Running
	I0925 10:53:19.442615   97187 system_pods.go:61] "kube-proxy-wlsv6" [e04d98ce-ec4c-4cb4-8ae8-329b6240c025] Running
	I0925 10:53:19.442619   97187 system_pods.go:61] "kube-scheduler-multinode-529126" [36d25961-075e-4692-8ac5-bc14a734e7e0] Running
	I0925 10:53:19.442625   97187 system_pods.go:61] "storage-provisioner" [04177a18-0dee-40d2-aa22-df41fb209e8c] Running
	I0925 10:53:19.442631   97187 system_pods.go:74] duration metric: took 188.746643ms to wait for pod list to return data ...
	I0925 10:53:19.442640   97187 default_sa.go:34] waiting for default service account to be created ...
	I0925 10:53:19.636010   97187 request.go:629] Waited for 193.305794ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0925 10:53:19.636085   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0925 10:53:19.636095   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:19.636107   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:19.636122   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:19.638376   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:19.638400   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:19.638411   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:19 GMT
	I0925 10:53:19.638419   97187 round_trippers.go:580]     Audit-Id: 72a4f8c4-1569-4bf3-a372-8592362b4d07
	I0925 10:53:19.638427   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:19.638435   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:19.638443   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:19.638457   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:19.638470   97187 round_trippers.go:580]     Content-Length: 261
	I0925 10:53:19.638503   97187 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"447"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"4bb9613d-7c44-4b67-9658-cc0b9cb5f339","resourceVersion":"332","creationTimestamp":"2023-09-25T10:52:45Z"}}]}
	I0925 10:53:19.638699   97187 default_sa.go:45] found service account: "default"
	I0925 10:53:19.638716   97187 default_sa.go:55] duration metric: took 196.067429ms for default service account to be created ...
	I0925 10:53:19.638726   97187 system_pods.go:116] waiting for k8s-apps to be running ...
	I0925 10:53:19.836149   97187 request.go:629] Waited for 197.354809ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0925 10:53:19.836216   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0925 10:53:19.836227   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:19.836234   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:19.836241   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:19.839302   97187 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0925 10:53:19.839332   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:19.839339   97187 round_trippers.go:580]     Audit-Id: c1518630-9611-4da1-ae07-4cd4621eaa28
	I0925 10:53:19.839345   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:19.839350   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:19.839355   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:19.839361   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:19.839367   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:19 GMT
	I0925 10:53:19.839854   97187 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"447"},"items":[{"metadata":{"name":"coredns-5dd5756b68-bl6dx","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a90d3ac5-8e74-4c8e-8e26-54a4ca4e1274","resourceVersion":"442","creationTimestamp":"2023-09-25T10:52:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"17c53e32-79cf-483d-acaf-d20cd52d9012","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"17c53e32-79cf-483d-acaf-d20cd52d9012\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I0925 10:53:19.842263   97187 system_pods.go:86] 8 kube-system pods found
	I0925 10:53:19.842287   97187 system_pods.go:89] "coredns-5dd5756b68-bl6dx" [a90d3ac5-8e74-4c8e-8e26-54a4ca4e1274] Running
	I0925 10:53:19.842295   97187 system_pods.go:89] "etcd-multinode-529126" [183f855c-8718-4c7f-a90c-5491729da613] Running
	I0925 10:53:19.842312   97187 system_pods.go:89] "kindnet-62xf8" [23f29aa7-de9c-43bc-950c-59009bd0d74e] Running
	I0925 10:53:19.842320   97187 system_pods.go:89] "kube-apiserver-multinode-529126" [19c42393-64d3-470e-9f21-aad8c233bf42] Running
	I0925 10:53:19.842331   97187 system_pods.go:89] "kube-controller-manager-multinode-529126" [8091d853-28f2-45bf-924a-88a9809f836e] Running
	I0925 10:53:19.842340   97187 system_pods.go:89] "kube-proxy-wlsv6" [e04d98ce-ec4c-4cb4-8ae8-329b6240c025] Running
	I0925 10:53:19.842347   97187 system_pods.go:89] "kube-scheduler-multinode-529126" [36d25961-075e-4692-8ac5-bc14a734e7e0] Running
	I0925 10:53:19.842353   97187 system_pods.go:89] "storage-provisioner" [04177a18-0dee-40d2-aa22-df41fb209e8c] Running
	I0925 10:53:19.842365   97187 system_pods.go:126] duration metric: took 203.633268ms to wait for k8s-apps to be running ...
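Both pod sweeps above (system_pods.go:43 and :116) reduce to one List call over kube-system plus a per-pod phase check. A compact equivalent, reusing the hypothetical clientset from the earlier sketch:

package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// systemPodsRunning lists kube-system and fails on the first pod that is
// not in phase Running, matching the 8-pod checklist above.
func systemPodsRunning(ctx context.Context, cs *kubernetes.Clientset) error {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			return fmt.Errorf("pod %s is %s, not Running", p.Name, p.Status.Phase)
		}
	}
	return nil
}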
	I0925 10:53:19.842377   97187 system_svc.go:44] waiting for kubelet service to be running ....
	I0925 10:53:19.842429   97187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 10:53:19.852503   97187 system_svc.go:56] duration metric: took 10.120812ms WaitForService to wait for kubelet.
	I0925 10:53:19.852524   97187 kubeadm.go:581] duration metric: took 34.679291056s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
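The last gate before that summary line is systemd rather than the API server: systemctl is-active exits 0 only when the unit is active. Sketched here as a local call instead of minikube's ssh_runner:

package sketch

import "os/exec"

// kubeletActive mirrors the "systemctl is-active --quiet" probe above;
// the command's exit status is the whole answer.
func kubeletActive() bool {
	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}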
	I0925 10:53:19.852552   97187 node_conditions.go:102] verifying NodePressure condition ...
	I0925 10:53:20.035976   97187 request.go:629] Waited for 183.352397ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0925 10:53:20.036045   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0925 10:53:20.036052   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:20.036059   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:20.036066   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:20.038376   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:20.038397   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:20.038408   97187 round_trippers.go:580]     Audit-Id: 801addd4-85a6-4fe4-858d-05500f131b56
	I0925 10:53:20.038416   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:20.038423   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:20.038428   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:20.038434   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:20.038443   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:20 GMT
	I0925 10:53:20.038563   97187 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"447"},"items":[{"metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"423","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6000 chars]
	I0925 10:53:20.039025   97187 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0925 10:53:20.039045   97187 node_conditions.go:123] node cpu capacity is 8
	I0925 10:53:20.039060   97187 node_conditions.go:105] duration metric: took 186.501793ms to run NodePressure ...
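The NodePressure verification reads each node's capacity (here 304681132Ki of ephemeral storage and 8 CPUs) and confirms no pressure condition is True. A sketch along those lines, with the same hypothetical clientset as before:

package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// checkNodePressure prints each node's capacity and fails if any
// memory/disk/PID pressure condition reports True.
func checkNodePressure(ctx context.Context, cs *kubernetes.Clientset) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
		for _, c := range n.Status.Conditions {
			pressure := c.Type == corev1.NodeMemoryPressure ||
				c.Type == corev1.NodeDiskPressure || c.Type == corev1.NodePIDPressure
			if pressure && c.Status == corev1.ConditionTrue {
				return fmt.Errorf("node %s reports %s", n.Name, c.Type)
			}
		}
	}
	return nil
}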
	I0925 10:53:20.039077   97187 start.go:228] waiting for startup goroutines ...
	I0925 10:53:20.039093   97187 start.go:233] waiting for cluster config update ...
	I0925 10:53:20.039107   97187 start.go:242] writing updated cluster config ...
	I0925 10:53:20.041396   97187 out.go:177] 
	I0925 10:53:20.043289   97187 config.go:182] Loaded profile config "multinode-529126": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0925 10:53:20.043368   97187 profile.go:148] Saving config to /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/multinode-529126/config.json ...
	I0925 10:53:20.045124   97187 out.go:177] * Starting worker node multinode-529126-m02 in cluster multinode-529126
	I0925 10:53:20.046327   97187 cache.go:122] Beginning downloading kic base image for docker with crio
	I0925 10:53:20.047597   97187 out.go:177] * Pulling base image ...
	I0925 10:53:20.049170   97187 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I0925 10:53:20.049192   97187 cache.go:57] Caching tarball of preloaded images
	I0925 10:53:20.049200   97187 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I0925 10:53:20.049276   97187 preload.go:174] Found /home/jenkins/minikube-integration/17297-5744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0925 10:53:20.049287   97187 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I0925 10:53:20.049357   97187 profile.go:148] Saving config to /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/multinode-529126/config.json ...
	I0925 10:53:20.064963   97187 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon, skipping pull
	I0925 10:53:20.064984   97187 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 exists in daemon, skipping load
	I0925 10:53:20.065001   97187 cache.go:195] Successfully downloaded all kic artifacts
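The cache phase above short-circuits twice: the preload tarball is already on disk, and the kicbase image (pinned by digest) is already in the local Docker daemon, so both the download and the load are skipped. The daemon-side check can be approximated with a CLI probe; minikube's image.go talks to the Docker API instead, so treat this as a shell-level stand-in:

package sketch

import "os/exec"

// imageInDaemon reports whether the given reference resolves locally;
// "docker image inspect" exits non-zero when the image is absent.
func imageInDaemon(ref string) bool {
	return exec.Command("docker", "image", "inspect", ref).Run() == nil
}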
	I0925 10:53:20.065030   97187 start.go:365] acquiring machines lock for multinode-529126-m02: {Name:mk9ac2189cfde0201dcd8d942447815e3dbcfc38 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 10:53:20.065131   97187 start.go:369] acquired machines lock for "multinode-529126-m02" in 82.401µs
	I0925 10:53:20.065156   97187 start.go:93] Provisioning new machine with config: &{Name:multinode-529126 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-529126 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0925 10:53:20.065232   97187 start.go:125] createHost starting for "m02" (driver="docker")
	I0925 10:53:20.067090   97187 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0925 10:53:20.067182   97187 start.go:159] libmachine.API.Create for "multinode-529126" (driver="docker")
	I0925 10:53:20.067206   97187 client.go:168] LocalClient.Create starting
	I0925 10:53:20.067279   97187 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca.pem
	I0925 10:53:20.067307   97187 main.go:141] libmachine: Decoding PEM data...
	I0925 10:53:20.067322   97187 main.go:141] libmachine: Parsing certificate...
	I0925 10:53:20.067372   97187 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17297-5744/.minikube/certs/cert.pem
	I0925 10:53:20.067391   97187 main.go:141] libmachine: Decoding PEM data...
	I0925 10:53:20.067402   97187 main.go:141] libmachine: Parsing certificate...
	I0925 10:53:20.067565   97187 cli_runner.go:164] Run: docker network inspect multinode-529126 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0925 10:53:20.083281   97187 network_create.go:76] Found existing network {name:multinode-529126 subnet:0xc0013aeb40 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0925 10:53:20.083316   97187 kic.go:117] calculated static IP "192.168.58.3" for the "multinode-529126-m02" container
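Given the existing multinode-529126 network with gateway 192.168.58.1, the second node's address is derived arithmetically: the control plane sits at .2 and m02 gets .3. A sketch of that offset calculation for the /24 case shown (not a general-purpose allocator):

package sketch

import (
	"fmt"
	"net"
)

// staticIP adds an offset to the gateway's last octet, so
// staticIP("192.168.58.1", 2) returns "192.168.58.3" as in the log.
func staticIP(gateway string, offset int) (string, error) {
	ip := net.ParseIP(gateway).To4()
	if ip == nil {
		return "", fmt.Errorf("not an IPv4 gateway: %q", gateway)
	}
	out := make(net.IP, len(ip))
	copy(out, ip)
	out[3] += byte(offset)
	return out.String(), nil
}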
	I0925 10:53:20.083368   97187 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0925 10:53:20.098899   97187 cli_runner.go:164] Run: docker volume create multinode-529126-m02 --label name.minikube.sigs.k8s.io=multinode-529126-m02 --label created_by.minikube.sigs.k8s.io=true
	I0925 10:53:20.115358   97187 oci.go:103] Successfully created a docker volume multinode-529126-m02
	I0925 10:53:20.115424   97187 cli_runner.go:164] Run: docker run --rm --name multinode-529126-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-529126-m02 --entrypoint /usr/bin/test -v multinode-529126-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -d /var/lib
	I0925 10:53:20.624258   97187 oci.go:107] Successfully prepared a docker volume multinode-529126-m02
	I0925 10:53:20.624313   97187 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I0925 10:53:20.624333   97187 kic.go:190] Starting extracting preloaded images to volume ...
	I0925 10:53:20.624397   97187 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17297-5744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-529126-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir
	I0925 10:53:25.599149   97187 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17297-5744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-529126-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir: (4.974710024s)
	I0925 10:53:25.599182   97187 kic.go:199] duration metric: took 4.974846 seconds to extract preloaded images to volume
	W0925 10:53:25.599309   97187 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0925 10:53:25.599394   97187 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0925 10:53:25.649513   97187 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-529126-m02 --name multinode-529126-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-529126-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-529126-m02 --network multinode-529126 --ip 192.168.58.3 --volume multinode-529126-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3
	I0925 10:53:25.946744   97187 cli_runner.go:164] Run: docker container inspect multinode-529126-m02 --format={{.State.Running}}
	I0925 10:53:25.966055   97187 cli_runner.go:164] Run: docker container inspect multinode-529126-m02 --format={{.State.Status}}
	I0925 10:53:25.982994   97187 cli_runner.go:164] Run: docker exec multinode-529126-m02 stat /var/lib/dpkg/alternatives/iptables
	I0925 10:53:26.022921   97187 oci.go:144] the created container "multinode-529126-m02" has a running status.
	I0925 10:53:26.022960   97187 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17297-5744/.minikube/machines/multinode-529126-m02/id_rsa...
	I0925 10:53:26.115340   97187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/machines/multinode-529126-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0925 10:53:26.115391   97187 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17297-5744/.minikube/machines/multinode-529126-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0925 10:53:26.135247   97187 cli_runner.go:164] Run: docker container inspect multinode-529126-m02 --format={{.State.Status}}
	I0925 10:53:26.152927   97187 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0925 10:53:26.152956   97187 kic_runner.go:114] Args: [docker exec --privileged multinode-529126-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
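Key provisioning above has two halves: write an RSA keypair under .minikube/machines/..., then copy the public half into the container's /home/docker/.ssh/authorized_keys and chown it. A sketch of the local half; the key size and file layout here are assumptions:

package sketch

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"os"
	"path/filepath"

	"golang.org/x/crypto/ssh"
)

// writeSSHKey generates id_rsa/id_rsa.pub in dir, using x/crypto/ssh to
// render the authorized_keys form ("ssh-rsa AAAA...", 381 bytes in the log).
func writeSSHKey(dir string) error {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return err
	}
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	if err := os.WriteFile(filepath.Join(dir, "id_rsa"), privPEM, 0600); err != nil {
		return err
	}
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		return err
	}
	return os.WriteFile(filepath.Join(dir, "id_rsa.pub"), ssh.MarshalAuthorizedKey(pub), 0644)
}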
	I0925 10:53:26.217401   97187 cli_runner.go:164] Run: docker container inspect multinode-529126-m02 --format={{.State.Status}}
	I0925 10:53:26.233494   97187 machine.go:88] provisioning docker machine ...
	I0925 10:53:26.233529   97187 ubuntu.go:169] provisioning hostname "multinode-529126-m02"
	I0925 10:53:26.233597   97187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-529126-m02
	I0925 10:53:26.254580   97187 main.go:141] libmachine: Using SSH client type: native
	I0925 10:53:26.254903   97187 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0925 10:53:26.254918   97187 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-529126-m02 && echo "multinode-529126-m02" | sudo tee /etc/hostname
	I0925 10:53:26.255572   97187 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59736->127.0.0.1:32852: read: connection reset by peer
	I0925 10:53:29.390703   97187 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-529126-m02
	
	I0925 10:53:29.390779   97187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-529126-m02
	I0925 10:53:29.408288   97187 main.go:141] libmachine: Using SSH client type: native
	I0925 10:53:29.408751   97187 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0925 10:53:29.408774   97187 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-529126-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-529126-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-529126-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0925 10:53:29.536529   97187 main.go:141] libmachine: SSH cmd err, output: <nil>: 
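Hostname provisioning runs shell over SSH against the published port (127.0.0.1:32852); the first dial above fails with a connection reset because sshd in the fresh container is not up yet, and the caller simply retries. One round of such a run, sketched with golang.org/x/crypto/ssh; host-key verification is skipped here purely for brevity:

package sketch

import (
	"time"

	"golang.org/x/crypto/ssh"
)

// runSSH dials addr (e.g. "127.0.0.1:32852"), runs cmd, and returns its
// combined output; the caller retries on handshake/reset errors.
func runSSH(addr, user string, signer ssh.Signer, cmd string) (string, error) {
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only: no known_hosts check
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err // e.g. "connection reset by peer" while sshd starts
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}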
	I0925 10:53:29.536559   97187 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17297-5744/.minikube CaCertPath:/home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17297-5744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17297-5744/.minikube}
	I0925 10:53:29.536580   97187 ubuntu.go:177] setting up certificates
	I0925 10:53:29.536592   97187 provision.go:83] configureAuth start
	I0925 10:53:29.536670   97187 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-529126-m02
	I0925 10:53:29.552352   97187 provision.go:138] copyHostCerts
	I0925 10:53:29.552384   97187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17297-5744/.minikube/ca.pem
	I0925 10:53:29.552416   97187 exec_runner.go:144] found /home/jenkins/minikube-integration/17297-5744/.minikube/ca.pem, removing ...
	I0925 10:53:29.552422   97187 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17297-5744/.minikube/ca.pem
	I0925 10:53:29.552492   97187 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17297-5744/.minikube/ca.pem (1078 bytes)
	I0925 10:53:29.552581   97187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17297-5744/.minikube/cert.pem
	I0925 10:53:29.552600   97187 exec_runner.go:144] found /home/jenkins/minikube-integration/17297-5744/.minikube/cert.pem, removing ...
	I0925 10:53:29.552605   97187 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17297-5744/.minikube/cert.pem
	I0925 10:53:29.552653   97187 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17297-5744/.minikube/cert.pem (1123 bytes)
	I0925 10:53:29.552743   97187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17297-5744/.minikube/key.pem
	I0925 10:53:29.552761   97187 exec_runner.go:144] found /home/jenkins/minikube-integration/17297-5744/.minikube/key.pem, removing ...
	I0925 10:53:29.552765   97187 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17297-5744/.minikube/key.pem
	I0925 10:53:29.552789   97187 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17297-5744/.minikube/key.pem (1675 bytes)
	I0925 10:53:29.552836   97187 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17297-5744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca-key.pem org=jenkins.multinode-529126-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-529126-m02]
	I0925 10:53:29.614079   97187 provision.go:172] copyRemoteCerts
	I0925 10:53:29.614147   97187 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0925 10:53:29.614184   97187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-529126-m02
	I0925 10:53:29.630072   97187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/multinode-529126-m02/id_rsa Username:docker}
	I0925 10:53:29.724647   97187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0925 10:53:29.724713   97187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0925 10:53:29.745050   97187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0925 10:53:29.745100   97187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0925 10:53:29.765597   97187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0925 10:53:29.765650   97187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0925 10:53:29.785812   97187 provision.go:86] duration metric: configureAuth took 249.206806ms
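configureAuth refreshes the host-side copies of ca/cert/key.pem, then mints a server certificate whose SANs cover the new node's static IP and names (san=[192.168.58.3 127.0.0.1 localhost minikube multinode-529126-m02]) and scps it to /etc/docker. A sketch of the issuance step; CA loading is omitted, the helper is hypothetical, and the expiry mirrors the CertExpiration:26280h0m0s value from the config dump above:

package sketch

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// serverCert issues a CA-signed server certificate with the SANs from
// the log; it returns the DER bytes and the new private key.
func serverCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, org string) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{org}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "multinode-529126-m02"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.58.3"), net.ParseIP("127.0.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	return der, key, err
}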
	I0925 10:53:29.785836   97187 ubuntu.go:193] setting minikube options for container-runtime
	I0925 10:53:29.786007   97187 config.go:182] Loaded profile config "multinode-529126": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0925 10:53:29.786094   97187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-529126-m02
	I0925 10:53:29.802140   97187 main.go:141] libmachine: Using SSH client type: native
	I0925 10:53:29.802545   97187 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0925 10:53:29.802566   97187 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0925 10:53:30.013217   97187 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0925 10:53:30.013241   97187 machine.go:91] provisioned docker machine in 3.779720597s
	I0925 10:53:30.013248   97187 client.go:171] LocalClient.Create took 9.946035709s
	I0925 10:53:30.013266   97187 start.go:167] duration metric: libmachine.API.Create for "multinode-529126" took 9.946084629s
	I0925 10:53:30.013275   97187 start.go:300] post-start starting for "multinode-529126-m02" (driver="docker")
	I0925 10:53:30.013290   97187 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0925 10:53:30.013379   97187 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0925 10:53:30.013417   97187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-529126-m02
	I0925 10:53:30.030397   97187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/multinode-529126-m02/id_rsa Username:docker}
	I0925 10:53:30.121217   97187 ssh_runner.go:195] Run: cat /etc/os-release
	I0925 10:53:30.124224   97187 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I0925 10:53:30.124247   97187 command_runner.go:130] > NAME="Ubuntu"
	I0925 10:53:30.124255   97187 command_runner.go:130] > VERSION_ID="22.04"
	I0925 10:53:30.124260   97187 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I0925 10:53:30.124267   97187 command_runner.go:130] > VERSION_CODENAME=jammy
	I0925 10:53:30.124273   97187 command_runner.go:130] > ID=ubuntu
	I0925 10:53:30.124279   97187 command_runner.go:130] > ID_LIKE=debian
	I0925 10:53:30.124285   97187 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0925 10:53:30.124293   97187 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0925 10:53:30.124303   97187 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0925 10:53:30.124319   97187 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0925 10:53:30.124329   97187 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0925 10:53:30.124381   97187 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0925 10:53:30.124419   97187 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0925 10:53:30.124437   97187 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0925 10:53:30.124447   97187 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0925 10:53:30.124457   97187 filesync.go:126] Scanning /home/jenkins/minikube-integration/17297-5744/.minikube/addons for local assets ...
	I0925 10:53:30.124529   97187 filesync.go:126] Scanning /home/jenkins/minikube-integration/17297-5744/.minikube/files for local assets ...
	I0925 10:53:30.124611   97187 filesync.go:149] local asset: /home/jenkins/minikube-integration/17297-5744/.minikube/files/etc/ssl/certs/125162.pem -> 125162.pem in /etc/ssl/certs
	I0925 10:53:30.124620   97187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/files/etc/ssl/certs/125162.pem -> /etc/ssl/certs/125162.pem
	I0925 10:53:30.124735   97187 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0925 10:53:30.132201   97187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/files/etc/ssl/certs/125162.pem --> /etc/ssl/certs/125162.pem (1708 bytes)
	I0925 10:53:30.153023   97187 start.go:303] post-start completed in 139.731275ms
	I0925 10:53:30.153367   97187 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-529126-m02
	I0925 10:53:30.168569   97187 profile.go:148] Saving config to /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/multinode-529126/config.json ...
	I0925 10:53:30.168862   97187 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0925 10:53:30.168915   97187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-529126-m02
	I0925 10:53:30.183529   97187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/multinode-529126-m02/id_rsa Username:docker}
	I0925 10:53:30.273033   97187 command_runner.go:130] > 19%
	I0925 10:53:30.273220   97187 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0925 10:53:30.277180   97187 command_runner.go:130] > 236G
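	The two checks above gauge disk pressure on /var before continuing. A sketch of what the pipelines parse (awk selects from the second line of df output: column 5 is Use%, and column 4 with -BG is the available space in GiB):
	
		df -h /var | awk 'NR==2{print $5}'    # e.g. 19%
		df -BG /var | awk 'NR==2{print $4}'   # e.g. 236G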
	I0925 10:53:30.277204   97187 start.go:128] duration metric: createHost completed in 10.211964622s
	I0925 10:53:30.277215   97187 start.go:83] releasing machines lock for "multinode-529126-m02", held for 10.212071538s
	I0925 10:53:30.277273   97187 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-529126-m02
	I0925 10:53:30.295363   97187 out.go:177] * Found network options:
	I0925 10:53:30.296903   97187 out.go:177]   - NO_PROXY=192.168.58.2
	W0925 10:53:30.298286   97187 proxy.go:119] fail to check proxy env: Error ip not in block
	W0925 10:53:30.298326   97187 proxy.go:119] fail to check proxy env: Error ip not in block
	I0925 10:53:30.298391   97187 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0925 10:53:30.298424   97187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-529126-m02
	I0925 10:53:30.298452   97187 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0925 10:53:30.298497   97187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-529126-m02
	I0925 10:53:30.316886   97187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/multinode-529126-m02/id_rsa Username:docker}
	I0925 10:53:30.317005   97187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/multinode-529126-m02/id_rsa Username:docker}
	I0925 10:53:30.536683   97187 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0925 10:53:30.536683   97187 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0925 10:53:30.540748   97187 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0925 10:53:30.540768   97187 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0925 10:53:30.540780   97187 command_runner.go:130] > Device: b0h/176d	Inode: 540251      Links: 1
	I0925 10:53:30.540787   97187 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0925 10:53:30.540792   97187 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0925 10:53:30.540800   97187 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0925 10:53:30.540807   97187 command_runner.go:130] > Change: 2023-09-25 10:33:46.731088186 +0000
	I0925 10:53:30.540815   97187 command_runner.go:130] >  Birth: 2023-09-25 10:33:46.731088186 +0000
	I0925 10:53:30.540872   97187 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0925 10:53:30.557961   97187 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0925 10:53:30.558042   97187 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0925 10:53:30.583142   97187 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0925 10:53:30.583204   97187 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
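	The two find/mv passes above sideline any preinstalled loopback and bridge/podman CNI configs by renaming them with a .mk_disabled suffix, so the CNI that minikube deploys later is the only one CRI-O loads. An equivalent sketch with explicit quoting (the log's unquoted globs rely on them not matching anything in the calling shell):
	
		sudo find /etc/cni/net.d -maxdepth 1 -type f -name '*loopback.conf*' \
		  -not -name '*.mk_disabled' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
		sudo find /etc/cni/net.d -maxdepth 1 -type f \
		  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
		  -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;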
	I0925 10:53:30.583216   97187 start.go:469] detecting cgroup driver to use...
	I0925 10:53:30.583254   97187 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0925 10:53:30.583302   97187 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0925 10:53:30.595962   97187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0925 10:53:30.605359   97187 docker.go:197] disabling cri-docker service (if available) ...
	I0925 10:53:30.605404   97187 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0925 10:53:30.617005   97187 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0925 10:53:30.629316   97187 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0925 10:53:30.704759   97187 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0925 10:53:30.787655   97187 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0925 10:53:30.787685   97187 docker.go:213] disabling docker service ...
	I0925 10:53:30.787720   97187 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0925 10:53:30.803793   97187 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0925 10:53:30.813447   97187 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0925 10:53:30.884146   97187 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0925 10:53:30.884214   97187 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0925 10:53:30.894381   97187 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0925 10:53:30.966068   97187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
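	The sequence above leaves CRI-O as the only runtime on the node: both cri-docker and docker are stopped, their socket units disabled, and the services masked so socket activation cannot restart them. A condensed sketch of the same teardown:
	
		for unit in cri-docker docker; do
		  sudo systemctl stop -f "$unit.socket" "$unit.service"
		  sudo systemctl disable "$unit.socket"
		  sudo systemctl mask "$unit.service"
		done
		sudo systemctl is-active --quiet docker || echo "docker is down"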
	I0925 10:53:30.975862   97187 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0925 10:53:30.989380   97187 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
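	The tee above pins crictl to CRI-O's socket; the echoed line is the entire resulting /etc/crictl.yaml. With it in place, crictl commands need no --runtime-endpoint flag:
	
		sudo crictl ps -a     # all containers, via unix:///var/run/crio/crio.sock
		sudo crictl images    # images in CRI-O's storage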
	I0925 10:53:30.989435   97187 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0925 10:53:30.989480   97187 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0925 10:53:30.997563   97187 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0925 10:53:30.997614   97187 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0925 10:53:31.005630   97187 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0925 10:53:31.013451   97187 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
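	The three sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with a consistent trio of settings: the pause image Kubernetes v1.28.2 expects, the cgroupfs manager detected on the host, and conmon_cgroup = "pod" (CRI-O expects "pod" or empty when the manager is cgroupfs, which is why the old key is deleted before the new one is appended). A quick verification sketch:
	
		grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' \
		  /etc/crio/crio.conf.d/02-crio.conf
		# pause_image = "registry.k8s.io/pause:3.9"
		# cgroup_manager = "cgroupfs"
		# conmon_cgroup = "pod"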
	I0925 10:53:31.021518   97187 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0925 10:53:31.028990   97187 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0925 10:53:31.035305   97187 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0925 10:53:31.035857   97187 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
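	The two kernel knobs above are pod-networking prerequisites: bridged traffic must be visible to iptables, and IPv4 forwarding must be on so the node can route pod traffic. An equivalent sysctl-form sketch:
	
		sudo sysctl net.bridge.bridge-nf-call-iptables   # expect "= 1"
		sudo sysctl -w net.ipv4.ip_forward=1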
	I0925 10:53:31.042915   97187 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 10:53:31.115865   97187 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0925 10:53:31.207581   97187 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0925 10:53:31.207637   97187 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0925 10:53:31.210664   97187 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0925 10:53:31.210691   97187 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0925 10:53:31.210702   97187 command_runner.go:130] > Device: b9h/185d	Inode: 190         Links: 1
	I0925 10:53:31.210715   97187 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0925 10:53:31.210725   97187 command_runner.go:130] > Access: 2023-09-25 10:53:31.192711385 +0000
	I0925 10:53:31.210742   97187 command_runner.go:130] > Modify: 2023-09-25 10:53:31.192711385 +0000
	I0925 10:53:31.210750   97187 command_runner.go:130] > Change: 2023-09-25 10:53:31.192711385 +0000
	I0925 10:53:31.210758   97187 command_runner.go:130] >  Birth: -
	I0925 10:53:31.210802   97187 start.go:537] Will wait 60s for crictl version
	I0925 10:53:31.210841   97187 ssh_runner.go:195] Run: which crictl
	I0925 10:53:31.213731   97187 command_runner.go:130] > /usr/bin/crictl
	I0925 10:53:31.213789   97187 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0925 10:53:31.242311   97187 command_runner.go:130] > Version:  0.1.0
	I0925 10:53:31.242329   97187 command_runner.go:130] > RuntimeName:  cri-o
	I0925 10:53:31.242334   97187 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0925 10:53:31.242339   97187 command_runner.go:130] > RuntimeApiVersion:  v1
	I0925 10:53:31.243984   97187 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
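	The stat/which/crictl sequence above is the runtime readiness gate: wait for the socket to exist, then confirm the CRI API answers on it. A sketch of the same 60-second wait using a socket test instead of stat:
	
		for _ in $(seq 1 60); do
		  [ -S /var/run/crio/crio.sock ] && break
		  sleep 1
		done
		sudo "$(which crictl)" version   # RuntimeName: cri-o, RuntimeApiVersion: v1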
	I0925 10:53:31.244067   97187 ssh_runner.go:195] Run: crio --version
	I0925 10:53:31.275496   97187 command_runner.go:130] > crio version 1.24.6
	I0925 10:53:31.275518   97187 command_runner.go:130] > Version:          1.24.6
	I0925 10:53:31.275524   97187 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0925 10:53:31.275532   97187 command_runner.go:130] > GitTreeState:     clean
	I0925 10:53:31.275537   97187 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0925 10:53:31.275542   97187 command_runner.go:130] > GoVersion:        go1.18.2
	I0925 10:53:31.275549   97187 command_runner.go:130] > Compiler:         gc
	I0925 10:53:31.275557   97187 command_runner.go:130] > Platform:         linux/amd64
	I0925 10:53:31.275566   97187 command_runner.go:130] > Linkmode:         dynamic
	I0925 10:53:31.275580   97187 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0925 10:53:31.275592   97187 command_runner.go:130] > SeccompEnabled:   true
	I0925 10:53:31.275599   97187 command_runner.go:130] > AppArmorEnabled:  false
	I0925 10:53:31.275662   97187 ssh_runner.go:195] Run: crio --version
	I0925 10:53:31.307117   97187 command_runner.go:130] > crio version 1.24.6
	I0925 10:53:31.307143   97187 command_runner.go:130] > Version:          1.24.6
	I0925 10:53:31.307158   97187 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0925 10:53:31.307164   97187 command_runner.go:130] > GitTreeState:     clean
	I0925 10:53:31.307173   97187 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0925 10:53:31.307182   97187 command_runner.go:130] > GoVersion:        go1.18.2
	I0925 10:53:31.307192   97187 command_runner.go:130] > Compiler:         gc
	I0925 10:53:31.307202   97187 command_runner.go:130] > Platform:         linux/amd64
	I0925 10:53:31.307212   97187 command_runner.go:130] > Linkmode:         dynamic
	I0925 10:53:31.307228   97187 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0925 10:53:31.307239   97187 command_runner.go:130] > SeccompEnabled:   true
	I0925 10:53:31.307249   97187 command_runner.go:130] > AppArmorEnabled:  false
	I0925 10:53:31.310393   97187 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.6 ...
	I0925 10:53:31.311984   97187 out.go:177]   - env NO_PROXY=192.168.58.2
	I0925 10:53:31.313442   97187 cli_runner.go:164] Run: docker network inspect multinode-529126 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0925 10:53:31.328929   97187 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0925 10:53:31.332361   97187 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
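	The hosts edit above is idempotent: any previous host.minikube.internal entry is filtered out, the current gateway IP (192.168.58.1 here) is appended, and the temp file is copied back so repeated provisioning never duplicates the line. The same pattern in isolation:
	
		{ grep -v $'\thost.minikube.internal$' /etc/hosts
		  printf '%s\t%s\n' 192.168.58.1 host.minikube.internal
		} > "/tmp/h.$$" && sudo cp "/tmp/h.$$" /etc/hosts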
	I0925 10:53:31.341789   97187 certs.go:56] Setting up /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/multinode-529126 for IP: 192.168.58.3
	I0925 10:53:31.341819   97187 certs.go:190] acquiring lock for shared ca certs: {Name:mk1dc4321044392bda6d0b04ee5f4e5cca314d3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 10:53:31.341932   97187 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17297-5744/.minikube/ca.key
	I0925 10:53:31.341966   97187 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17297-5744/.minikube/proxy-client-ca.key
	I0925 10:53:31.341975   97187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0925 10:53:31.341992   97187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0925 10:53:31.342004   97187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0925 10:53:31.342016   97187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0925 10:53:31.342060   97187 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/home/jenkins/minikube-integration/17297-5744/.minikube/certs/12516.pem (1338 bytes)
	W0925 10:53:31.342085   97187 certs.go:433] ignoring /home/jenkins/minikube-integration/17297-5744/.minikube/certs/home/jenkins/minikube-integration/17297-5744/.minikube/certs/12516_empty.pem, impossibly tiny 0 bytes
	I0925 10:53:31.342095   97187 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca-key.pem (1675 bytes)
	I0925 10:53:31.342114   97187 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca.pem (1078 bytes)
	I0925 10:53:31.342136   97187 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/home/jenkins/minikube-integration/17297-5744/.minikube/certs/cert.pem (1123 bytes)
	I0925 10:53:31.342163   97187 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/home/jenkins/minikube-integration/17297-5744/.minikube/certs/key.pem (1675 bytes)
	I0925 10:53:31.342199   97187 certs.go:437] found cert: /home/jenkins/minikube-integration/17297-5744/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17297-5744/.minikube/files/etc/ssl/certs/125162.pem (1708 bytes)
	I0925 10:53:31.342226   97187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0925 10:53:31.342238   97187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/12516.pem -> /usr/share/ca-certificates/12516.pem
	I0925 10:53:31.342251   97187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17297-5744/.minikube/files/etc/ssl/certs/125162.pem -> /usr/share/ca-certificates/125162.pem
	I0925 10:53:31.342617   97187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0925 10:53:31.362524   97187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0925 10:53:31.382385   97187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0925 10:53:31.402117   97187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0925 10:53:31.423118   97187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0925 10:53:31.443554   97187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/certs/12516.pem --> /usr/share/ca-certificates/12516.pem (1338 bytes)
	I0925 10:53:31.463615   97187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/files/etc/ssl/certs/125162.pem --> /usr/share/ca-certificates/125162.pem (1708 bytes)
	I0925 10:53:31.484477   97187 ssh_runner.go:195] Run: openssl version
	I0925 10:53:31.489081   97187 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0925 10:53:31.489222   97187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0925 10:53:31.497429   97187 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0925 10:53:31.500310   97187 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 25 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I0925 10:53:31.500358   97187 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 25 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I0925 10:53:31.500402   97187 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0925 10:53:31.506213   97187 command_runner.go:130] > b5213941
	I0925 10:53:31.506393   97187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0925 10:53:31.514124   97187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12516.pem && ln -fs /usr/share/ca-certificates/12516.pem /etc/ssl/certs/12516.pem"
	I0925 10:53:31.521758   97187 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12516.pem
	I0925 10:53:31.524508   97187 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 25 10:39 /usr/share/ca-certificates/12516.pem
	I0925 10:53:31.524528   97187 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 25 10:39 /usr/share/ca-certificates/12516.pem
	I0925 10:53:31.524554   97187 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12516.pem
	I0925 10:53:31.530333   97187 command_runner.go:130] > 51391683
	I0925 10:53:31.530495   97187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12516.pem /etc/ssl/certs/51391683.0"
	I0925 10:53:31.538664   97187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125162.pem && ln -fs /usr/share/ca-certificates/125162.pem /etc/ssl/certs/125162.pem"
	I0925 10:53:31.547001   97187 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125162.pem
	I0925 10:53:31.549938   97187 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 25 10:39 /usr/share/ca-certificates/125162.pem
	I0925 10:53:31.549963   97187 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 25 10:39 /usr/share/ca-certificates/125162.pem
	I0925 10:53:31.549995   97187 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125162.pem
	I0925 10:53:31.555602   97187 command_runner.go:130] > 3ec20f2e
	I0925 10:53:31.555759   97187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125162.pem /etc/ssl/certs/3ec20f2e.0"
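	The openssl/ln pairs above wire each installed CA into OpenSSL's trust lookup: OpenSSL finds CAs in /etc/ssl/certs by subject-hash filename (<hash>.0), so each PEM gets a symlink named after its `openssl x509 -hash` output. A simplified sketch of the pattern for one cert (filenames from the log):
	
		pem=/usr/share/ca-certificates/minikubeCA.pem
		h=$(openssl x509 -hash -noout -in "$pem")   # b5213941 in this run
		sudo ln -fs "$pem" "/etc/ssl/certs/$h.0"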
	I0925 10:53:31.563776   97187 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0925 10:53:31.566657   97187 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0925 10:53:31.566687   97187 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0925 10:53:31.566754   97187 ssh_runner.go:195] Run: crio config
	I0925 10:53:31.600275   97187 command_runner.go:130] ! time="2023-09-25 10:53:31.599912167Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0925 10:53:31.600301   97187 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0925 10:53:31.606588   97187 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0925 10:53:31.606609   97187 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0925 10:53:31.606615   97187 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0925 10:53:31.606619   97187 command_runner.go:130] > #
	I0925 10:53:31.606628   97187 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0925 10:53:31.606639   97187 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0925 10:53:31.606650   97187 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0925 10:53:31.606668   97187 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0925 10:53:31.606675   97187 command_runner.go:130] > # reload'.
	I0925 10:53:31.606682   97187 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0925 10:53:31.606690   97187 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0925 10:53:31.606698   97187 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0925 10:53:31.606707   97187 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0925 10:53:31.606711   97187 command_runner.go:130] > [crio]
	I0925 10:53:31.606718   97187 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0925 10:53:31.606725   97187 command_runner.go:130] > # containers images, in this directory.
	I0925 10:53:31.606733   97187 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0925 10:53:31.606741   97187 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0925 10:53:31.606747   97187 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0925 10:53:31.606753   97187 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0925 10:53:31.606762   97187 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0925 10:53:31.606766   97187 command_runner.go:130] > # storage_driver = "vfs"
	I0925 10:53:31.606775   97187 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0925 10:53:31.606781   97187 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0925 10:53:31.606787   97187 command_runner.go:130] > # storage_option = [
	I0925 10:53:31.606790   97187 command_runner.go:130] > # ]
	I0925 10:53:31.606798   97187 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0925 10:53:31.606807   97187 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0925 10:53:31.606815   97187 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0925 10:53:31.606821   97187 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0925 10:53:31.606829   97187 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0925 10:53:31.606834   97187 command_runner.go:130] > # always happen on a node reboot
	I0925 10:53:31.606840   97187 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0925 10:53:31.606846   97187 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0925 10:53:31.606854   97187 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0925 10:53:31.606864   97187 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0925 10:53:31.606872   97187 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0925 10:53:31.606879   97187 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0925 10:53:31.606889   97187 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0925 10:53:31.606896   97187 command_runner.go:130] > # internal_wipe = true
	I0925 10:53:31.606901   97187 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0925 10:53:31.606910   97187 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0925 10:53:31.606918   97187 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0925 10:53:31.606923   97187 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0925 10:53:31.606929   97187 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0925 10:53:31.606933   97187 command_runner.go:130] > [crio.api]
	I0925 10:53:31.606939   97187 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0925 10:53:31.606946   97187 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0925 10:53:31.606952   97187 command_runner.go:130] > # IP address on which the stream server will listen.
	I0925 10:53:31.606959   97187 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0925 10:53:31.606965   97187 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0925 10:53:31.606973   97187 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0925 10:53:31.606977   97187 command_runner.go:130] > # stream_port = "0"
	I0925 10:53:31.606984   97187 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0925 10:53:31.606988   97187 command_runner.go:130] > # stream_enable_tls = false
	I0925 10:53:31.606995   97187 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0925 10:53:31.607001   97187 command_runner.go:130] > # stream_idle_timeout = ""
	I0925 10:53:31.607007   97187 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0925 10:53:31.607016   97187 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0925 10:53:31.607019   97187 command_runner.go:130] > # minutes.
	I0925 10:53:31.607024   97187 command_runner.go:130] > # stream_tls_cert = ""
	I0925 10:53:31.607032   97187 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0925 10:53:31.607038   97187 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0925 10:53:31.607045   97187 command_runner.go:130] > # stream_tls_key = ""
	I0925 10:53:31.607050   97187 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0925 10:53:31.607066   97187 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0925 10:53:31.607075   97187 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0925 10:53:31.607079   97187 command_runner.go:130] > # stream_tls_ca = ""
	I0925 10:53:31.607086   97187 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0925 10:53:31.607093   97187 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0925 10:53:31.607100   97187 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0925 10:53:31.607107   97187 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0925 10:53:31.607125   97187 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0925 10:53:31.607133   97187 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0925 10:53:31.607137   97187 command_runner.go:130] > [crio.runtime]
	I0925 10:53:31.607146   97187 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0925 10:53:31.607151   97187 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0925 10:53:31.607158   97187 command_runner.go:130] > # "nofile=1024:2048"
	I0925 10:53:31.607164   97187 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0925 10:53:31.607170   97187 command_runner.go:130] > # default_ulimits = [
	I0925 10:53:31.607174   97187 command_runner.go:130] > # ]
	I0925 10:53:31.607182   97187 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0925 10:53:31.607187   97187 command_runner.go:130] > # no_pivot = false
	I0925 10:53:31.607198   97187 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0925 10:53:31.607204   97187 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0925 10:53:31.607209   97187 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0925 10:53:31.607215   97187 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0925 10:53:31.607222   97187 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0925 10:53:31.607229   97187 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0925 10:53:31.607235   97187 command_runner.go:130] > # conmon = ""
	I0925 10:53:31.607239   97187 command_runner.go:130] > # Cgroup setting for conmon
	I0925 10:53:31.607248   97187 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0925 10:53:31.607253   97187 command_runner.go:130] > conmon_cgroup = "pod"
	I0925 10:53:31.607262   97187 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0925 10:53:31.607269   97187 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0925 10:53:31.607275   97187 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0925 10:53:31.607282   97187 command_runner.go:130] > # conmon_env = [
	I0925 10:53:31.607285   97187 command_runner.go:130] > # ]
	I0925 10:53:31.607291   97187 command_runner.go:130] > # Additional environment variables to set for all the
	I0925 10:53:31.607298   97187 command_runner.go:130] > # containers. These are overridden if set in the
	I0925 10:53:31.607304   97187 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0925 10:53:31.607313   97187 command_runner.go:130] > # default_env = [
	I0925 10:53:31.607316   97187 command_runner.go:130] > # ]
	I0925 10:53:31.607322   97187 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0925 10:53:31.607326   97187 command_runner.go:130] > # selinux = false
	I0925 10:53:31.607333   97187 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0925 10:53:31.607342   97187 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0925 10:53:31.607347   97187 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0925 10:53:31.607355   97187 command_runner.go:130] > # seccomp_profile = ""
	I0925 10:53:31.607360   97187 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0925 10:53:31.607372   97187 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0925 10:53:31.607378   97187 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0925 10:53:31.607383   97187 command_runner.go:130] > # which might increase security.
	I0925 10:53:31.607390   97187 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0925 10:53:31.607398   97187 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0925 10:53:31.607407   97187 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0925 10:53:31.607413   97187 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0925 10:53:31.607421   97187 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0925 10:53:31.607427   97187 command_runner.go:130] > # This option supports live configuration reload.
	I0925 10:53:31.607436   97187 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0925 10:53:31.607444   97187 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0925 10:53:31.607449   97187 command_runner.go:130] > # the cgroup blockio controller.
	I0925 10:53:31.607454   97187 command_runner.go:130] > # blockio_config_file = ""
	I0925 10:53:31.607461   97187 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0925 10:53:31.607465   97187 command_runner.go:130] > # irqbalance daemon.
	I0925 10:53:31.607473   97187 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0925 10:53:31.607479   97187 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0925 10:53:31.607486   97187 command_runner.go:130] > # This option supports live configuration reload.
	I0925 10:53:31.607491   97187 command_runner.go:130] > # rdt_config_file = ""
	I0925 10:53:31.607497   97187 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0925 10:53:31.607501   97187 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0925 10:53:31.607509   97187 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0925 10:53:31.607515   97187 command_runner.go:130] > # separate_pull_cgroup = ""
	I0925 10:53:31.607521   97187 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0925 10:53:31.607530   97187 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0925 10:53:31.607534   97187 command_runner.go:130] > # will be added.
	I0925 10:53:31.607540   97187 command_runner.go:130] > # default_capabilities = [
	I0925 10:53:31.607549   97187 command_runner.go:130] > # 	"CHOWN",
	I0925 10:53:31.607559   97187 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0925 10:53:31.607565   97187 command_runner.go:130] > # 	"FSETID",
	I0925 10:53:31.607574   97187 command_runner.go:130] > # 	"FOWNER",
	I0925 10:53:31.607580   97187 command_runner.go:130] > # 	"SETGID",
	I0925 10:53:31.607589   97187 command_runner.go:130] > # 	"SETUID",
	I0925 10:53:31.607596   97187 command_runner.go:130] > # 	"SETPCAP",
	I0925 10:53:31.607603   97187 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0925 10:53:31.607607   97187 command_runner.go:130] > # 	"KILL",
	I0925 10:53:31.607613   97187 command_runner.go:130] > # ]
	I0925 10:53:31.607620   97187 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0925 10:53:31.607630   97187 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0925 10:53:31.607635   97187 command_runner.go:130] > # add_inheritable_capabilities = true
	I0925 10:53:31.607641   97187 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0925 10:53:31.607649   97187 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0925 10:53:31.607654   97187 command_runner.go:130] > # default_sysctls = [
	I0925 10:53:31.607660   97187 command_runner.go:130] > # ]
	I0925 10:53:31.607665   97187 command_runner.go:130] > # List of devices on the host that a
	I0925 10:53:31.607676   97187 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0925 10:53:31.607683   97187 command_runner.go:130] > # allowed_devices = [
	I0925 10:53:31.607687   97187 command_runner.go:130] > # 	"/dev/fuse",
	I0925 10:53:31.607692   97187 command_runner.go:130] > # ]
	I0925 10:53:31.607697   97187 command_runner.go:130] > # List of additional devices, specified as
	I0925 10:53:31.607719   97187 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0925 10:53:31.607726   97187 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0925 10:53:31.607732   97187 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0925 10:53:31.607737   97187 command_runner.go:130] > # additional_devices = [
	I0925 10:53:31.607742   97187 command_runner.go:130] > # ]
	I0925 10:53:31.607748   97187 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0925 10:53:31.607754   97187 command_runner.go:130] > # cdi_spec_dirs = [
	I0925 10:53:31.607758   97187 command_runner.go:130] > # 	"/etc/cdi",
	I0925 10:53:31.607765   97187 command_runner.go:130] > # 	"/var/run/cdi",
	I0925 10:53:31.607768   97187 command_runner.go:130] > # ]
	I0925 10:53:31.607775   97187 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0925 10:53:31.607783   97187 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0925 10:53:31.607789   97187 command_runner.go:130] > # Defaults to false.
	I0925 10:53:31.607794   97187 command_runner.go:130] > # device_ownership_from_security_context = false
	I0925 10:53:31.607802   97187 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0925 10:53:31.607809   97187 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0925 10:53:31.607814   97187 command_runner.go:130] > # hooks_dir = [
	I0925 10:53:31.607819   97187 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0925 10:53:31.607823   97187 command_runner.go:130] > # ]
	I0925 10:53:31.607831   97187 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0925 10:53:31.607839   97187 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0925 10:53:31.607847   97187 command_runner.go:130] > # its default mounts from the following two files:
	I0925 10:53:31.607850   97187 command_runner.go:130] > #
	I0925 10:53:31.607859   97187 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0925 10:53:31.607865   97187 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0925 10:53:31.607882   97187 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0925 10:53:31.607888   97187 command_runner.go:130] > #
	I0925 10:53:31.607894   97187 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0925 10:53:31.607903   97187 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0925 10:53:31.607910   97187 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0925 10:53:31.607917   97187 command_runner.go:130] > #      only add mounts it finds in this file.
	I0925 10:53:31.607921   97187 command_runner.go:130] > #
	I0925 10:53:31.607926   97187 command_runner.go:130] > # default_mounts_file = ""
	I0925 10:53:31.607932   97187 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0925 10:53:31.607939   97187 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0925 10:53:31.607945   97187 command_runner.go:130] > # pids_limit = 0
	I0925 10:53:31.607951   97187 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0925 10:53:31.607959   97187 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0925 10:53:31.607968   97187 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0925 10:53:31.607975   97187 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0925 10:53:31.607982   97187 command_runner.go:130] > # log_size_max = -1
	I0925 10:53:31.607988   97187 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0925 10:53:31.607995   97187 command_runner.go:130] > # log_to_journald = false
	I0925 10:53:31.608001   97187 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0925 10:53:31.608009   97187 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0925 10:53:31.608014   97187 command_runner.go:130] > # Path to directory for container attach sockets.
	I0925 10:53:31.608019   97187 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0925 10:53:31.608024   97187 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0925 10:53:31.608029   97187 command_runner.go:130] > # bind_mount_prefix = ""
	I0925 10:53:31.608034   97187 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0925 10:53:31.608041   97187 command_runner.go:130] > # read_only = false
	I0925 10:53:31.608047   97187 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0925 10:53:31.608056   97187 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0925 10:53:31.608061   97187 command_runner.go:130] > # live configuration reload.
	I0925 10:53:31.608067   97187 command_runner.go:130] > # log_level = "info"
	I0925 10:53:31.608072   97187 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0925 10:53:31.608079   97187 command_runner.go:130] > # This option supports live configuration reload.
	I0925 10:53:31.608083   97187 command_runner.go:130] > # log_filter = ""
	I0925 10:53:31.608089   97187 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0925 10:53:31.608097   97187 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0925 10:53:31.608101   97187 command_runner.go:130] > # separated by comma.
	I0925 10:53:31.608105   97187 command_runner.go:130] > # uid_mappings = ""
	I0925 10:53:31.608113   97187 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0925 10:53:31.608120   97187 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0925 10:53:31.608126   97187 command_runner.go:130] > # separated by comma.
	I0925 10:53:31.608130   97187 command_runner.go:130] > # gid_mappings = ""
	I0925 10:53:31.608139   97187 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0925 10:53:31.608146   97187 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0925 10:53:31.608154   97187 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0925 10:53:31.608159   97187 command_runner.go:130] > # minimum_mappable_uid = -1
	I0925 10:53:31.608167   97187 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0925 10:53:31.608173   97187 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0925 10:53:31.608181   97187 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0925 10:53:31.608185   97187 command_runner.go:130] > # minimum_mappable_gid = -1
	I0925 10:53:31.608191   97187 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0925 10:53:31.608199   97187 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0925 10:53:31.608205   97187 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0925 10:53:31.608212   97187 command_runner.go:130] > # ctr_stop_timeout = 30
	I0925 10:53:31.608218   97187 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0925 10:53:31.608228   97187 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0925 10:53:31.608235   97187 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0925 10:53:31.608240   97187 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0925 10:53:31.608246   97187 command_runner.go:130] > # drop_infra_ctr = true
	I0925 10:53:31.608252   97187 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0925 10:53:31.608260   97187 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0925 10:53:31.608267   97187 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0925 10:53:31.608273   97187 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0925 10:53:31.608279   97187 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0925 10:53:31.608286   97187 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0925 10:53:31.608290   97187 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0925 10:53:31.608299   97187 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0925 10:53:31.608303   97187 command_runner.go:130] > # pinns_path = ""
	I0925 10:53:31.608311   97187 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0925 10:53:31.608319   97187 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0925 10:53:31.608326   97187 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0925 10:53:31.608330   97187 command_runner.go:130] > # default_runtime = "runc"
	I0925 10:53:31.608337   97187 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0925 10:53:31.608344   97187 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0925 10:53:31.608355   97187 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0925 10:53:31.608361   97187 command_runner.go:130] > # creation as a file is not desired either.
	I0925 10:53:31.608376   97187 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0925 10:53:31.608382   97187 command_runner.go:130] > # the hostname is being managed dynamically.
	I0925 10:53:31.608386   97187 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0925 10:53:31.608400   97187 command_runner.go:130] > # ]
	I0925 10:53:31.608407   97187 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0925 10:53:31.608416   97187 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0925 10:53:31.608422   97187 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0925 10:53:31.608431   97187 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0925 10:53:31.608434   97187 command_runner.go:130] > #
	I0925 10:53:31.608439   97187 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0925 10:53:31.608444   97187 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0925 10:53:31.608448   97187 command_runner.go:130] > #  runtime_type = "oci"
	I0925 10:53:31.608455   97187 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0925 10:53:31.608460   97187 command_runner.go:130] > #  privileged_without_host_devices = false
	I0925 10:53:31.608467   97187 command_runner.go:130] > #  allowed_annotations = []
	I0925 10:53:31.608471   97187 command_runner.go:130] > # Where:
	I0925 10:53:31.608478   97187 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0925 10:53:31.608484   97187 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0925 10:53:31.608493   97187 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0925 10:53:31.608499   97187 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0925 10:53:31.608505   97187 command_runner.go:130] > #   in $PATH.
	I0925 10:53:31.608512   97187 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0925 10:53:31.608519   97187 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0925 10:53:31.608525   97187 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0925 10:53:31.608529   97187 command_runner.go:130] > #   state.
	I0925 10:53:31.608537   97187 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0925 10:53:31.608545   97187 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0925 10:53:31.608551   97187 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0925 10:53:31.608558   97187 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0925 10:53:31.608565   97187 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0925 10:53:31.608574   97187 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0925 10:53:31.608581   97187 command_runner.go:130] > #   The currently recognized values are:
	I0925 10:53:31.608588   97187 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0925 10:53:31.608597   97187 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0925 10:53:31.608603   97187 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0925 10:53:31.608611   97187 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0925 10:53:31.608619   97187 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0925 10:53:31.608628   97187 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0925 10:53:31.608650   97187 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0925 10:53:31.608663   97187 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0925 10:53:31.608672   97187 command_runner.go:130] > #   should be moved to the container's cgroup
	I0925 10:53:31.608677   97187 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0925 10:53:31.608684   97187 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0925 10:53:31.608689   97187 command_runner.go:130] > runtime_type = "oci"
	I0925 10:53:31.608696   97187 command_runner.go:130] > runtime_root = "/run/runc"
	I0925 10:53:31.608701   97187 command_runner.go:130] > runtime_config_path = ""
	I0925 10:53:31.608707   97187 command_runner.go:130] > monitor_path = ""
	I0925 10:53:31.608712   97187 command_runner.go:130] > monitor_cgroup = ""
	I0925 10:53:31.608718   97187 command_runner.go:130] > monitor_exec_cgroup = ""
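	Putting the documented fields together, a minimal sketch of what an extra handler table could look like in this file; the handler name, paths, and annotation allow-list below are hypothetical, not taken from this run:

	[crio.runtime.runtimes.myhandler]
	# Hypothetical binary; omit runtime_path to resolve "myhandler" from $PATH instead.
	runtime_path = "/usr/local/bin/myhandler"
	runtime_type = "oci"            # the default when omitted
	runtime_root = "/run/myhandler"
	privileged_without_host_devices = false
	# Only annotations listed here are processed for pods using this handler.
	allowed_annotations = [
		"io.kubernetes.cri-o.ShmSize",
		"io.kubernetes.cri-o.Devices",
	]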
	I0925 10:53:31.608793   97187 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0925 10:53:31.608805   97187 command_runner.go:130] > # running containers
	I0925 10:53:31.608810   97187 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0925 10:53:31.608816   97187 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0925 10:53:31.608822   97187 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0925 10:53:31.608831   97187 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I0925 10:53:31.608836   97187 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0925 10:53:31.608843   97187 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0925 10:53:31.608848   97187 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0925 10:53:31.608855   97187 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0925 10:53:31.608860   97187 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0925 10:53:31.608868   97187 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0925 10:53:31.608874   97187 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0925 10:53:31.608882   97187 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0925 10:53:31.608888   97187 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0925 10:53:31.608898   97187 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix, and a set of resources it supports mutating.
	I0925 10:53:31.608908   97187 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0925 10:53:31.608914   97187 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0925 10:53:31.608926   97187 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0925 10:53:31.608936   97187 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0925 10:53:31.608944   97187 command_runner.go:130] > # to override the default value for that resource type.
	I0925 10:53:31.608951   97187 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0925 10:53:31.608957   97187 command_runner.go:130] > # Example:
	I0925 10:53:31.608962   97187 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0925 10:53:31.608970   97187 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0925 10:53:31.608975   97187 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0925 10:53:31.608983   97187 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0925 10:53:31.608988   97187 command_runner.go:130] > # cpuset = "0-1"
	I0925 10:53:31.608993   97187 command_runner.go:130] > # cpushares = 0
	I0925 10:53:31.608996   97187 command_runner.go:130] > # Where:
	I0925 10:53:31.609003   97187 command_runner.go:130] > # The workload name is workload-type.
	I0925 10:53:31.609009   97187 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation set (this is a precise string match).
	I0925 10:53:31.609018   97187 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0925 10:53:31.609024   97187 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0925 10:53:31.609034   97187 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0925 10:53:31.609042   97187 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0925 10:53:31.609046   97187 command_runner.go:130] > # 
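	Assembled from the commented fragments above into one stanza, the (experimental, hypothetical) workload definition would read:

	[crio.runtime.workloads.workload-type]
	# Pods opt in by carrying this exact annotation key (value ignored).
	activation_annotation = "io.crio/workload"
	annotation_prefix = "io.crio.workload-type"
	[crio.runtime.workloads.workload-type.resources]
	# Defaults applied to opted-in containers unless overridden per container.
	cpushares = 0
	cpuset = "0-1"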
	I0925 10:53:31.609055   97187 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0925 10:53:31.609060   97187 command_runner.go:130] > #
	I0925 10:53:31.609066   97187 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0925 10:53:31.609073   97187 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0925 10:53:31.609079   97187 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0925 10:53:31.609088   97187 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0925 10:53:31.609094   97187 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0925 10:53:31.609100   97187 command_runner.go:130] > [crio.image]
	I0925 10:53:31.609107   97187 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0925 10:53:31.609113   97187 command_runner.go:130] > # default_transport = "docker://"
	I0925 10:53:31.609119   97187 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0925 10:53:31.609128   97187 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0925 10:53:31.609135   97187 command_runner.go:130] > # global_auth_file = ""
	I0925 10:53:31.609140   97187 command_runner.go:130] > # The image used to instantiate infra containers.
	I0925 10:53:31.609148   97187 command_runner.go:130] > # This option supports live configuration reload.
	I0925 10:53:31.609155   97187 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0925 10:53:31.609161   97187 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0925 10:53:31.609169   97187 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0925 10:53:31.609178   97187 command_runner.go:130] > # This option supports live configuration reload.
	I0925 10:53:31.609184   97187 command_runner.go:130] > # pause_image_auth_file = ""
	I0925 10:53:31.609190   97187 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0925 10:53:31.609198   97187 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0925 10:53:31.609207   97187 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0925 10:53:31.609215   97187 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0925 10:53:31.609222   97187 command_runner.go:130] > # pause_command = "/pause"
	I0925 10:53:31.609229   97187 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0925 10:53:31.609237   97187 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0925 10:53:31.609245   97187 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0925 10:53:31.609254   97187 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0925 10:53:31.609261   97187 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0925 10:53:31.609268   97187 command_runner.go:130] > # signature_policy = ""
	I0925 10:53:31.609279   97187 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0925 10:53:31.609288   97187 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0925 10:53:31.609295   97187 command_runner.go:130] > # changing them here.
	I0925 10:53:31.609299   97187 command_runner.go:130] > # insecure_registries = [
	I0925 10:53:31.609305   97187 command_runner.go:130] > # ]
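	For illustration only, skipping TLS verification for a hypothetical in-cluster registry would look like the following; as the comment above says, /etc/containers/registries.conf is the preferred place for this:

	insecure_registries = [
		"registry.local:5000",   # hypothetical registry host
	]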
	I0925 10:53:31.609311   97187 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0925 10:53:31.609319   97187 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0925 10:53:31.609323   97187 command_runner.go:130] > # image_volumes = "mkdir"
	I0925 10:53:31.609328   97187 command_runner.go:130] > # Temporary directory to use for storing big files
	I0925 10:53:31.609336   97187 command_runner.go:130] > # big_files_temporary_dir = ""
	I0925 10:53:31.609344   97187 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0925 10:53:31.609350   97187 command_runner.go:130] > # CNI plugins.
	I0925 10:53:31.609355   97187 command_runner.go:130] > [crio.network]
	I0925 10:53:31.609367   97187 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0925 10:53:31.609375   97187 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0925 10:53:31.609382   97187 command_runner.go:130] > # cni_default_network = ""
	I0925 10:53:31.609391   97187 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0925 10:53:31.609398   97187 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0925 10:53:31.609404   97187 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0925 10:53:31.609410   97187 command_runner.go:130] > # plugin_dirs = [
	I0925 10:53:31.609414   97187 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0925 10:53:31.609420   97187 command_runner.go:130] > # ]
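	A sketch of the section with the commented defaults made explicit; the network name is hypothetical, while the directories are the defaults shown above:

	[crio.network]
	cni_default_network = "kindnet"    # hypothetical; "" picks the first config found in network_dir
	network_dir = "/etc/cni/net.d/"
	plugin_dirs = [
		"/opt/cni/bin/",
	]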
	I0925 10:53:31.609427   97187 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I0925 10:53:31.609433   97187 command_runner.go:130] > [crio.metrics]
	I0925 10:53:31.609439   97187 command_runner.go:130] > # Globally enable or disable metrics support.
	I0925 10:53:31.609445   97187 command_runner.go:130] > # enable_metrics = false
	I0925 10:53:31.609450   97187 command_runner.go:130] > # Specify enabled metrics collectors.
	I0925 10:53:31.609457   97187 command_runner.go:130] > # By default, all metrics are enabled.
	I0925 10:53:31.609467   97187 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0925 10:53:31.609475   97187 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0925 10:53:31.609484   97187 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0925 10:53:31.609488   97187 command_runner.go:130] > # metrics_collectors = [
	I0925 10:53:31.609494   97187 command_runner.go:130] > # 	"operations",
	I0925 10:53:31.609499   97187 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0925 10:53:31.609506   97187 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0925 10:53:31.609510   97187 command_runner.go:130] > # 	"operations_errors",
	I0925 10:53:31.609517   97187 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0925 10:53:31.609521   97187 command_runner.go:130] > # 	"image_pulls_by_name",
	I0925 10:53:31.609528   97187 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0925 10:53:31.609532   97187 command_runner.go:130] > # 	"image_pulls_failures",
	I0925 10:53:31.609539   97187 command_runner.go:130] > # 	"image_pulls_successes",
	I0925 10:53:31.609543   97187 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0925 10:53:31.609550   97187 command_runner.go:130] > # 	"image_layer_reuse",
	I0925 10:53:31.609554   97187 command_runner.go:130] > # 	"containers_oom_total",
	I0925 10:53:31.609560   97187 command_runner.go:130] > # 	"containers_oom",
	I0925 10:53:31.609565   97187 command_runner.go:130] > # 	"processes_defunct",
	I0925 10:53:31.609571   97187 command_runner.go:130] > # 	"operations_total",
	I0925 10:53:31.609575   97187 command_runner.go:130] > # 	"operations_latency_seconds",
	I0925 10:53:31.609583   97187 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0925 10:53:31.609590   97187 command_runner.go:130] > # 	"operations_errors_total",
	I0925 10:53:31.609594   97187 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0925 10:53:31.609601   97187 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0925 10:53:31.609606   97187 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0925 10:53:31.609612   97187 command_runner.go:130] > # 	"image_pulls_success_total",
	I0925 10:53:31.609617   97187 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0925 10:53:31.609624   97187 command_runner.go:130] > # 	"containers_oom_count_total",
	I0925 10:53:31.609627   97187 command_runner.go:130] > # ]
	I0925 10:53:31.609635   97187 command_runner.go:130] > # The port on which the metrics server will listen.
	I0925 10:53:31.609642   97187 command_runner.go:130] > # metrics_port = 9090
	I0925 10:53:31.609647   97187 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0925 10:53:31.609653   97187 command_runner.go:130] > # metrics_socket = ""
	I0925 10:53:31.609659   97187 command_runner.go:130] > # The certificate for the secure metrics server.
	I0925 10:53:31.609666   97187 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0925 10:53:31.609675   97187 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0925 10:53:31.609681   97187 command_runner.go:130] > # certificate on any modification event.
	I0925 10:53:31.609686   97187 command_runner.go:130] > # metrics_cert = ""
	I0925 10:53:31.609696   97187 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0925 10:53:31.609704   97187 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0925 10:53:31.609708   97187 command_runner.go:130] > # metrics_key = ""
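	Taken together, a hedged sketch of enabling the metrics endpoint with a trimmed collector list; the collector names come from the list above, and which ones to enable is purely illustrative:

	[crio.metrics]
	enable_metrics = true
	metrics_port = 9090
	metrics_collectors = [
		"operations",
		"image_pulls_failures",
		"containers_oom_total",
	]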
	I0925 10:53:31.609716   97187 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0925 10:53:31.609722   97187 command_runner.go:130] > [crio.tracing]
	I0925 10:53:31.609728   97187 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0925 10:53:31.609734   97187 command_runner.go:130] > # enable_tracing = false
	I0925 10:53:31.609740   97187 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0925 10:53:31.609746   97187 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0925 10:53:31.609751   97187 command_runner.go:130] > # Number of samples to collect per million spans.
	I0925 10:53:31.609759   97187 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
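	As a sketch, turning tracing on against the default collector address, with a hypothetical sampling rate of roughly 10% of spans:

	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "0.0.0.0:4317"              # default collector address from above
	tracing_sampling_rate_per_million = 100000     # hypothetical: sample ~10% of spans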
	I0925 10:53:31.609765   97187 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0925 10:53:31.609771   97187 command_runner.go:130] > [crio.stats]
	I0925 10:53:31.609777   97187 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0925 10:53:31.609784   97187 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0925 10:53:31.609791   97187 command_runner.go:130] > # stats_collection_period = 0
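	A one-line sketch for periodic stats collection; the 10-second period is an arbitrary example, and 0 (the default) means on-demand:

	[crio.stats]
	stats_collection_period = 10   # hypothetical: collect every 10 seconds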
	I0925 10:53:31.609859   97187 cni.go:84] Creating CNI manager for ""
	I0925 10:53:31.609869   97187 cni.go:136] 2 nodes found, recommending kindnet
	I0925 10:53:31.609887   97187 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0925 10:53:31.609908   97187 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-529126 NodeName:multinode-529126-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0925 10:53:31.610019   97187 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-529126-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0925 10:53:31.610065   97187 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-529126-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:multinode-529126 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0925 10:53:31.610111   97187 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I0925 10:53:31.617372   97187 command_runner.go:130] > kubeadm
	I0925 10:53:31.617391   97187 command_runner.go:130] > kubectl
	I0925 10:53:31.617397   97187 command_runner.go:130] > kubelet
	I0925 10:53:31.617954   97187 binaries.go:44] Found k8s binaries, skipping transfer
	I0925 10:53:31.618017   97187 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0925 10:53:31.625899   97187 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0925 10:53:31.640833   97187 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0925 10:53:31.655777   97187 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0925 10:53:31.658779   97187 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0925 10:53:31.667993   97187 host.go:66] Checking if "multinode-529126" exists ...
	I0925 10:53:31.668207   97187 config.go:182] Loaded profile config "multinode-529126": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0925 10:53:31.668225   97187 start.go:304] JoinCluster: &{Name:multinode-529126 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-529126 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 10:53:31.668327   97187 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0925 10:53:31.668375   97187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-529126
	I0925 10:53:31.683837   97187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/multinode-529126/id_rsa Username:docker}
	I0925 10:53:31.828118   97187 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 7o8tyr.bbp4wm3r3cixxygs --discovery-token-ca-cert-hash sha256:1c3306aef1d6006cd71809f67dab403b34b4e2df13ef053b70f810d540f79657 
	I0925 10:53:31.832038   97187 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0925 10:53:31.832078   97187 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7o8tyr.bbp4wm3r3cixxygs --discovery-token-ca-cert-hash sha256:1c3306aef1d6006cd71809f67dab403b34b4e2df13ef053b70f810d540f79657 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-529126-m02"
	I0925 10:53:31.866033   97187 command_runner.go:130] ! W0925 10:53:31.865555    1106 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0925 10:53:31.893666   97187 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1042-gcp\n", err: exit status 1
	I0925 10:53:31.956402   97187 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0925 10:53:34.081550   97187 command_runner.go:130] > [preflight] Running pre-flight checks
	I0925 10:53:34.081576   97187 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0925 10:53:34.081583   97187 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1042-gcp
	I0925 10:53:34.081587   97187 command_runner.go:130] > OS: Linux
	I0925 10:53:34.081592   97187 command_runner.go:130] > CGROUPS_CPU: enabled
	I0925 10:53:34.081598   97187 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0925 10:53:34.081602   97187 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0925 10:53:34.081607   97187 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0925 10:53:34.081612   97187 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0925 10:53:34.081617   97187 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0925 10:53:34.081623   97187 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0925 10:53:34.081631   97187 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0925 10:53:34.081636   97187 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0925 10:53:34.081644   97187 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0925 10:53:34.081651   97187 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0925 10:53:34.081660   97187 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0925 10:53:34.081667   97187 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0925 10:53:34.081674   97187 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0925 10:53:34.081684   97187 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0925 10:53:34.081691   97187 command_runner.go:130] > This node has joined the cluster:
	I0925 10:53:34.081697   97187 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0925 10:53:34.081705   97187 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0925 10:53:34.081712   97187 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0925 10:53:34.081742   97187 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7o8tyr.bbp4wm3r3cixxygs --discovery-token-ca-cert-hash sha256:1c3306aef1d6006cd71809f67dab403b34b4e2df13ef053b70f810d540f79657 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-529126-m02": (2.249639921s)
	I0925 10:53:34.081760   97187 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0925 10:53:34.248525   97187 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0925 10:53:34.248567   97187 start.go:306] JoinCluster complete in 2.580340733s
	I0925 10:53:34.248580   97187 cni.go:84] Creating CNI manager for ""
	I0925 10:53:34.248586   97187 cni.go:136] 2 nodes found, recommending kindnet
	I0925 10:53:34.248628   97187 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0925 10:53:34.251759   97187 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0925 10:53:34.251788   97187 command_runner.go:130] >   Size: 3955775   	Blocks: 7728       IO Block: 4096   regular file
	I0925 10:53:34.251800   97187 command_runner.go:130] > Device: 37h/55d	Inode: 544061      Links: 1
	I0925 10:53:34.251809   97187 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0925 10:53:34.251820   97187 command_runner.go:130] > Access: 2023-05-09 19:53:47.000000000 +0000
	I0925 10:53:34.251832   97187 command_runner.go:130] > Modify: 2023-05-09 19:53:47.000000000 +0000
	I0925 10:53:34.251850   97187 command_runner.go:130] > Change: 2023-09-25 10:33:47.107124260 +0000
	I0925 10:53:34.251861   97187 command_runner.go:130] >  Birth: 2023-09-25 10:33:47.087122342 +0000
	I0925 10:53:34.251939   97187 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I0925 10:53:34.251953   97187 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0925 10:53:34.267023   97187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0925 10:53:34.474218   97187 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0925 10:53:34.477468   97187 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0925 10:53:34.479600   97187 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0925 10:53:34.489864   97187 command_runner.go:130] > daemonset.apps/kindnet configured
	I0925 10:53:34.493662   97187 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17297-5744/kubeconfig
	I0925 10:53:34.494086   97187 kapi.go:59] client config for multinode-529126: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17297-5744/.minikube/profiles/multinode-529126/client.crt", KeyFile:"/home/jenkins/minikube-integration/17297-5744/.minikube/profiles/multinode-529126/client.key", CAFile:"/home/jenkins/minikube-integration/17297-5744/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf6d20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0925 10:53:34.494368   97187 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0925 10:53:34.494381   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:34.494389   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:34.494394   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:34.496192   97187 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0925 10:53:34.496216   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:34.496226   97187 round_trippers.go:580]     Content-Length: 291
	I0925 10:53:34.496235   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:34 GMT
	I0925 10:53:34.496247   97187 round_trippers.go:580]     Audit-Id: c9451005-f9d7-43d7-99a2-0784d17c4573
	I0925 10:53:34.496257   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:34.496266   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:34.496271   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:34.496279   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:34.496313   97187 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"3fc7d046-e1e7-4b20-9a74-1e7aa1ebad8e","resourceVersion":"446","creationTimestamp":"2023-09-25T10:52:32Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0925 10:53:34.496400   97187 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-529126" context rescaled to 1 replicas
	I0925 10:53:34.496428   97187 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0925 10:53:34.498994   97187 out.go:177] * Verifying Kubernetes components...
	I0925 10:53:34.500494   97187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 10:53:34.510820   97187 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17297-5744/kubeconfig
	I0925 10:53:34.511057   97187 kapi.go:59] client config for multinode-529126: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17297-5744/.minikube/profiles/multinode-529126/client.crt", KeyFile:"/home/jenkins/minikube-integration/17297-5744/.minikube/profiles/multinode-529126/client.key", CAFile:"/home/jenkins/minikube-integration/17297-5744/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf6d20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0925 10:53:34.511309   97187 node_ready.go:35] waiting up to 6m0s for node "multinode-529126-m02" to be "Ready" ...
	I0925 10:53:34.511379   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126-m02
	I0925 10:53:34.511389   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:34.511400   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:34.511410   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:34.513512   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:34.513529   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:34.513549   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:34.513557   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:34.513564   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:34.513572   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:34.513583   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:34 GMT
	I0925 10:53:34.513599   97187 round_trippers.go:580]     Audit-Id: 67c1d617-be34-46cc-9959-b4b6db25e60b
	I0925 10:53:34.513723   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126-m02","uid":"09e0539f-1c12-46e1-856e-5b48118c21e7","resourceVersion":"482","creationTimestamp":"2023-09-25T10:53:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:53:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:53:33Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volu [truncated 5101 chars]
	I0925 10:53:34.514097   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126-m02
	I0925 10:53:34.514111   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:34.514121   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:34.514129   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:34.515853   97187 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0925 10:53:34.515867   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:34.515877   97187 round_trippers.go:580]     Audit-Id: 051217ee-60c6-4760-bf7f-6959f9a17b1c
	I0925 10:53:34.515885   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:34.515892   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:34.515900   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:34.515914   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:34.515928   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:34 GMT
	I0925 10:53:34.516004   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126-m02","uid":"09e0539f-1c12-46e1-856e-5b48118c21e7","resourceVersion":"482","creationTimestamp":"2023-09-25T10:53:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:53:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:53:33Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volu [truncated 5101 chars]
	I0925 10:53:35.017026   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126-m02
	I0925 10:53:35.017052   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:35.017060   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:35.017066   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:35.019328   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:35.019350   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:35.019356   97187 round_trippers.go:580]     Audit-Id: cdcbb946-fc52-47e6-9fac-45069242a7e1
	I0925 10:53:35.019362   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:35.019368   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:35.019378   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:35.019391   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:35.019404   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:35 GMT
	I0925 10:53:35.019536   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126-m02","uid":"09e0539f-1c12-46e1-856e-5b48118c21e7","resourceVersion":"494","creationTimestamp":"2023-09-25T10:53:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:53:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5210 chars]
	I0925 10:53:35.517369   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126-m02
	I0925 10:53:35.517389   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:35.517413   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:35.517420   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:35.519648   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:35.519670   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:35.519680   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:35.519688   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:35.519695   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:35 GMT
	I0925 10:53:35.519702   97187 round_trippers.go:580]     Audit-Id: b5605964-031f-434f-81bb-0501bcee90c6
	I0925 10:53:35.519711   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:35.519723   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:35.519920   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126-m02","uid":"09e0539f-1c12-46e1-856e-5b48118c21e7","resourceVersion":"497","creationTimestamp":"2023-09-25T10:53:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:53:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5296 chars]
	I0925 10:53:35.520300   97187 node_ready.go:49] node "multinode-529126-m02" has status "Ready":"True"
	I0925 10:53:35.520318   97187 node_ready.go:38] duration metric: took 1.008993183s waiting for node "multinode-529126-m02" to be "Ready" ...
	I0925 10:53:35.520330   97187 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 10:53:35.520409   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0925 10:53:35.520421   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:35.520432   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:35.520442   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:35.523268   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:35.523294   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:35.523305   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:35.523314   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:35.523323   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:35.523330   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:35.523339   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:35 GMT
	I0925 10:53:35.523344   97187 round_trippers.go:580]     Audit-Id: 400e77e5-2163-4c42-815e-19fd74176b87
	I0925 10:53:35.523861   97187 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"501"},"items":[{"metadata":{"name":"coredns-5dd5756b68-bl6dx","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a90d3ac5-8e74-4c8e-8e26-54a4ca4e1274","resourceVersion":"442","creationTimestamp":"2023-09-25T10:52:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"17c53e32-79cf-483d-acaf-d20cd52d9012","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"17c53e32-79cf-483d-acaf-d20cd52d9012\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68972 chars]
	I0925 10:53:35.525961   97187 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-bl6dx" in "kube-system" namespace to be "Ready" ...
	I0925 10:53:35.526038   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-bl6dx
	I0925 10:53:35.526048   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:35.526059   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:35.526069   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:35.527895   97187 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0925 10:53:35.527914   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:35.527923   97187 round_trippers.go:580]     Audit-Id: 21b843fb-8871-4776-b667-b0ffbe467aa0
	I0925 10:53:35.527932   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:35.527939   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:35.527947   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:35.527955   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:35.527966   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:35 GMT
	I0925 10:53:35.528075   97187 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-bl6dx","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a90d3ac5-8e74-4c8e-8e26-54a4ca4e1274","resourceVersion":"442","creationTimestamp":"2023-09-25T10:52:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"17c53e32-79cf-483d-acaf-d20cd52d9012","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"17c53e32-79cf-483d-acaf-d20cd52d9012\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0925 10:53:35.528494   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:35.528536   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:35.528554   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:35.528563   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:35.530423   97187 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0925 10:53:35.530438   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:35.530447   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:35 GMT
	I0925 10:53:35.530456   97187 round_trippers.go:580]     Audit-Id: 7bcef012-54e4-407b-b3ac-0afb43bc90e9
	I0925 10:53:35.530465   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:35.530474   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:35.530481   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:35.530487   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:35.530621   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"423","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0925 10:53:35.530899   97187 pod_ready.go:92] pod "coredns-5dd5756b68-bl6dx" in "kube-system" namespace has status "Ready":"True"
	I0925 10:53:35.530911   97187 pod_ready.go:81] duration metric: took 4.932332ms waiting for pod "coredns-5dd5756b68-bl6dx" in "kube-system" namespace to be "Ready" ...
	I0925 10:53:35.530919   97187 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-529126" in "kube-system" namespace to be "Ready" ...
	I0925 10:53:35.530965   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-529126
	I0925 10:53:35.530972   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:35.530979   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:35.530985   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:35.532667   97187 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0925 10:53:35.532685   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:35.532695   97187 round_trippers.go:580]     Audit-Id: 8dd450fe-ea34-420f-b297-564f1150e7a2
	I0925 10:53:35.532704   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:35.532713   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:35.532725   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:35.532735   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:35.532740   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:35 GMT
	I0925 10:53:35.532847   97187 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-529126","namespace":"kube-system","uid":"183f855c-8718-4c7f-a90c-5491729da613","resourceVersion":"352","creationTimestamp":"2023-09-25T10:52:33Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"7310fa126167d348c6d813d092a2c83e","kubernetes.io/config.mirror":"7310fa126167d348c6d813d092a2c83e","kubernetes.io/config.seen":"2023-09-25T10:52:32.952572508Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0925 10:53:35.533287   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:35.533302   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:35.533313   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:35.533324   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:35.534846   97187 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0925 10:53:35.534865   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:35.534875   97187 round_trippers.go:580]     Audit-Id: ec9dc3cc-39c6-4cb1-ba80-6c97b64f928c
	I0925 10:53:35.534883   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:35.534894   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:35.534902   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:35.534914   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:35.534926   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:35 GMT
	I0925 10:53:35.535061   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"423","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0925 10:53:35.535328   97187 pod_ready.go:92] pod "etcd-multinode-529126" in "kube-system" namespace has status "Ready":"True"
	I0925 10:53:35.535339   97187 pod_ready.go:81] duration metric: took 4.414573ms waiting for pod "etcd-multinode-529126" in "kube-system" namespace to be "Ready" ...
	I0925 10:53:35.535351   97187 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-529126" in "kube-system" namespace to be "Ready" ...
	I0925 10:53:35.535394   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-529126
	I0925 10:53:35.535401   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:35.535407   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:35.535413   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:35.537026   97187 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0925 10:53:35.537044   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:35.537054   97187 round_trippers.go:580]     Audit-Id: eb1895a0-246a-45d7-b2c4-86589613b0f8
	I0925 10:53:35.537063   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:35.537071   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:35.537083   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:35.537091   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:35.537100   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:35 GMT
	I0925 10:53:35.537212   97187 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-529126","namespace":"kube-system","uid":"19c42393-64d3-470e-9f21-aad8c233bf42","resourceVersion":"327","creationTimestamp":"2023-09-25T10:52:33Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"ad3d553cc6255758ba9dea20bc3a62bd","kubernetes.io/config.mirror":"ad3d553cc6255758ba9dea20bc3a62bd","kubernetes.io/config.seen":"2023-09-25T10:52:32.952574240Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0925 10:53:35.537584   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:35.537596   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:35.537602   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:35.537608   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:35.539025   97187 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0925 10:53:35.539043   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:35.539053   97187 round_trippers.go:580]     Audit-Id: 54a4b114-e15b-44ec-8e2f-d63588c963a7
	I0925 10:53:35.539061   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:35.539069   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:35.539081   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:35.539092   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:35.539104   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:35 GMT
	I0925 10:53:35.539232   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"423","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0925 10:53:35.539513   97187 pod_ready.go:92] pod "kube-apiserver-multinode-529126" in "kube-system" namespace has status "Ready":"True"
	I0925 10:53:35.539526   97187 pod_ready.go:81] duration metric: took 4.167416ms waiting for pod "kube-apiserver-multinode-529126" in "kube-system" namespace to be "Ready" ...
	I0925 10:53:35.539533   97187 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-529126" in "kube-system" namespace to be "Ready" ...
	I0925 10:53:35.539599   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-529126
	I0925 10:53:35.539609   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:35.539616   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:35.539621   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:35.541176   97187 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0925 10:53:35.541189   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:35.541197   97187 round_trippers.go:580]     Audit-Id: df728a4e-fd3e-4ec4-9587-ca69d2a854cc
	I0925 10:53:35.541203   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:35.541208   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:35.541214   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:35.541226   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:35.541234   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:35 GMT
	I0925 10:53:35.541364   97187 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-529126","namespace":"kube-system","uid":"8091d853-28f2-45bf-924a-88a9809f836e","resourceVersion":"306","creationTimestamp":"2023-09-25T10:52:33Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f3d738b53ad837e74ec7a881fba0aa09","kubernetes.io/config.mirror":"f3d738b53ad837e74ec7a881fba0aa09","kubernetes.io/config.seen":"2023-09-25T10:52:32.952575789Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0925 10:53:35.541691   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:35.541699   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:35.541706   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:35.541712   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:35.543169   97187 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0925 10:53:35.543184   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:35.543190   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:35.543195   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:35 GMT
	I0925 10:53:35.543201   97187 round_trippers.go:580]     Audit-Id: 6f118b5f-d1a4-4a4c-9c83-a896d5f2968c
	I0925 10:53:35.543205   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:35.543211   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:35.543222   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:35.543324   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"423","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0925 10:53:35.543597   97187 pod_ready.go:92] pod "kube-controller-manager-multinode-529126" in "kube-system" namespace has status "Ready":"True"
	I0925 10:53:35.543611   97187 pod_ready.go:81] duration metric: took 4.070977ms waiting for pod "kube-controller-manager-multinode-529126" in "kube-system" namespace to be "Ready" ...
	I0925 10:53:35.543619   97187 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bgjg6" in "kube-system" namespace to be "Ready" ...
	I0925 10:53:35.718025   97187 request.go:629] Waited for 174.346108ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bgjg6
	I0925 10:53:35.718080   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bgjg6
	I0925 10:53:35.718086   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:35.718095   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:35.718104   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:35.720369   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:35.720386   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:35.720393   97187 round_trippers.go:580]     Audit-Id: 3c4899b3-6bcd-47ea-959c-e1e60610ecbb
	I0925 10:53:35.720399   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:35.720404   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:35.720411   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:35.720420   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:35.720428   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:35 GMT
	I0925 10:53:35.720567   97187 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bgjg6","generateName":"kube-proxy-","namespace":"kube-system","uid":"45eb7fe4-bec5-4e5a-9b0f-d501b9b319e5","resourceVersion":"498","creationTimestamp":"2023-09-25T10:53:33Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"efa2104a-efe3-45ad-b54a-2bd7d8d60a92","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:53:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"efa2104a-efe3-45ad-b54a-2bd7d8d60a92\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0925 10:53:35.918396   97187 request.go:629] Waited for 197.348142ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-529126-m02
	I0925 10:53:35.918469   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126-m02
	I0925 10:53:35.918474   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:35.918481   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:35.918490   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:35.920725   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:35.920744   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:35.920750   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:35.920755   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:35.920763   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:35.920771   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:35 GMT
	I0925 10:53:35.920779   97187 round_trippers.go:580]     Audit-Id: 96d466d5-cd6c-4e6f-8702-388033173edf
	I0925 10:53:35.920791   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:35.920918   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126-m02","uid":"09e0539f-1c12-46e1-856e-5b48118c21e7","resourceVersion":"497","creationTimestamp":"2023-09-25T10:53:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:53:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5296 chars]
	I0925 10:53:35.921193   97187 pod_ready.go:92] pod "kube-proxy-bgjg6" in "kube-system" namespace has status "Ready":"True"
	I0925 10:53:35.921205   97187 pod_ready.go:81] duration metric: took 377.578766ms waiting for pod "kube-proxy-bgjg6" in "kube-system" namespace to be "Ready" ...
	I0925 10:53:35.921214   97187 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wlsv6" in "kube-system" namespace to be "Ready" ...
	I0925 10:53:36.117566   97187 request.go:629] Waited for 196.28442ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wlsv6
	I0925 10:53:36.117629   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wlsv6
	I0925 10:53:36.117639   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:36.117647   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:36.117653   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:36.119888   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:36.119906   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:36.119912   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:36.119917   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:36.119923   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:36.119932   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:36 GMT
	I0925 10:53:36.119940   97187 round_trippers.go:580]     Audit-Id: 2f663711-b051-450a-ba16-df7074ae8071
	I0925 10:53:36.119953   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:36.120072   97187 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wlsv6","generateName":"kube-proxy-","namespace":"kube-system","uid":"e04d98ce-ec4c-4cb4-8ae8-329b6240c025","resourceVersion":"408","creationTimestamp":"2023-09-25T10:52:45Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"efa2104a-efe3-45ad-b54a-2bd7d8d60a92","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"efa2104a-efe3-45ad-b54a-2bd7d8d60a92\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0925 10:53:36.317798   97187 request.go:629] Waited for 197.297312ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:36.317866   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:36.317875   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:36.317882   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:36.317889   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:36.319947   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:36.319963   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:36.319970   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:36.319975   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:36.319983   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:36.319992   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:36.320000   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:36 GMT
	I0925 10:53:36.320009   97187 round_trippers.go:580]     Audit-Id: 09ef5ba7-6000-48f5-bdc1-10f8c08c7c9e
	I0925 10:53:36.320155   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"423","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0925 10:53:36.320469   97187 pod_ready.go:92] pod "kube-proxy-wlsv6" in "kube-system" namespace has status "Ready":"True"
	I0925 10:53:36.320481   97187 pod_ready.go:81] duration metric: took 399.260692ms waiting for pod "kube-proxy-wlsv6" in "kube-system" namespace to be "Ready" ...
	I0925 10:53:36.320490   97187 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-529126" in "kube-system" namespace to be "Ready" ...
	I0925 10:53:36.517952   97187 request.go:629] Waited for 197.382691ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-529126
	I0925 10:53:36.518005   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-529126
	I0925 10:53:36.518010   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:36.518017   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:36.518024   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:36.520287   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:36.520305   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:36.520311   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:36 GMT
	I0925 10:53:36.520317   97187 round_trippers.go:580]     Audit-Id: 91ab533a-ec4b-4d7d-8063-5b6a12b181e1
	I0925 10:53:36.520323   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:36.520334   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:36.520346   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:36.520358   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:36.520471   97187 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-529126","namespace":"kube-system","uid":"36d25961-075e-4692-8ac5-bc14a734e7e0","resourceVersion":"319","creationTimestamp":"2023-09-25T10:52:33Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ef4bf696b4d00407d5ead8e1c16c7583","kubernetes.io/config.mirror":"ef4bf696b4d00407d5ead8e1c16c7583","kubernetes.io/config.seen":"2023-09-25T10:52:32.952567913Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-25T10:52:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0925 10:53:36.718186   97187 request.go:629] Waited for 197.347415ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:36.718246   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-529126
	I0925 10:53:36.718251   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:36.718259   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:36.718265   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:36.720281   97187 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0925 10:53:36.720305   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:36.720314   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:36.720323   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:36 GMT
	I0925 10:53:36.720331   97187 round_trippers.go:580]     Audit-Id: 4635112f-2421-492f-a21d-add10378e32d
	I0925 10:53:36.720340   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:36.720349   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:36.720358   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:36.720455   97187 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"423","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-25T10:52:30Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0925 10:53:36.720782   97187 pod_ready.go:92] pod "kube-scheduler-multinode-529126" in "kube-system" namespace has status "Ready":"True"
	I0925 10:53:36.720795   97187 pod_ready.go:81] duration metric: took 400.29929ms waiting for pod "kube-scheduler-multinode-529126" in "kube-system" namespace to be "Ready" ...
	I0925 10:53:36.720804   97187 pod_ready.go:38] duration metric: took 1.200456995s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 10:53:36.720819   97187 system_svc.go:44] waiting for kubelet service to be running ....
	I0925 10:53:36.720872   97187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 10:53:36.730860   97187 system_svc.go:56] duration metric: took 10.03407ms WaitForService to wait for kubelet.
	I0925 10:53:36.730880   97187 kubeadm.go:581] duration metric: took 2.234429751s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0925 10:53:36.730902   97187 node_conditions.go:102] verifying NodePressure condition ...
	I0925 10:53:36.918305   97187 request.go:629] Waited for 187.332507ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0925 10:53:36.918371   97187 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0925 10:53:36.918377   97187 round_trippers.go:469] Request Headers:
	I0925 10:53:36.918384   97187 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0925 10:53:36.918393   97187 round_trippers.go:473]     Accept: application/json, */*
	I0925 10:53:36.920691   97187 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0925 10:53:36.920709   97187 round_trippers.go:577] Response Headers:
	I0925 10:53:36.920716   97187 round_trippers.go:580]     Audit-Id: 67ea75ba-9d84-47f2-8964-c2a042254538
	I0925 10:53:36.920722   97187 round_trippers.go:580]     Cache-Control: no-cache, private
	I0925 10:53:36.920729   97187 round_trippers.go:580]     Content-Type: application/json
	I0925 10:53:36.920744   97187 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f7b1db16-36f9-494e-be95-e16fa1d7966b
	I0925 10:53:36.920756   97187 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af4ea79d-75a1-4db9-b3e5-59cffb64a41e
	I0925 10:53:36.920768   97187 round_trippers.go:580]     Date: Mon, 25 Sep 2023 10:53:36 GMT
	I0925 10:53:36.921000   97187 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"502"},"items":[{"metadata":{"name":"multinode-529126","uid":"fa3b93e2-a292-4a24-9224-0271e24a6c40","resourceVersion":"423","creationTimestamp":"2023-09-25T10:52:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-529126","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1bf6c3d5317028f348e55ea19d261973a6487d3c","minikube.k8s.io/name":"multinode-529126","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_25T10_52_33_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12288 chars]
	I0925 10:53:36.921475   97187 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0925 10:53:36.921487   97187 node_conditions.go:123] node cpu capacity is 8
	I0925 10:53:36.921496   97187 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0925 10:53:36.921499   97187 node_conditions.go:123] node cpu capacity is 8
	I0925 10:53:36.921504   97187 node_conditions.go:105] duration metric: took 190.59713ms to run NodePressure ...
	I0925 10:53:36.921515   97187 start.go:228] waiting for startup goroutines ...
	I0925 10:53:36.921539   97187 start.go:242] writing updated cluster config ...
	I0925 10:53:36.921795   97187 ssh_runner.go:195] Run: rm -f paused
	I0925 10:53:36.966288   97187 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I0925 10:53:36.969342   97187 out.go:177] * Done! kubectl is now configured to use "multinode-529126" cluster and "default" namespace by default
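The "Waited for … due to client-side throttling, not priority and fairness" entries above are emitted by client-go's client-side rate limiter (defaults: QPS=5, Burst=10), and the pod_ready lines are a poll of each control-plane pod's Ready condition against the API server. A minimal sketch of that readiness-poll pattern, assuming a client-go environment; the kubeconfig path is a hypothetical placeholder, and the pod name is taken from this run:

// Illustrative sketch only, not minikube's actual code: poll one pod's
// Ready condition, raising QPS/Burst to avoid the throttling waits above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path for the example.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50   // client-go default is 5
	cfg.Burst = 100 // client-go default is 10

	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute) // same budget as the log's "waiting up to 6m0s"
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(
			context.TODO(), "etcd-multinode-529126", metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					fmt.Println("pod is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for Ready")
}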
	
	* 
	* ==> CRI-O <==
	* Sep 25 10:53:17 multinode-529126 crio[958]: time="2023-09-25 10:53:17.440820239Z" level=info msg="Created container 009676d112661e7c3d3aabc91c6c553459b188d89f6e23d1fb1f9867ee2ad533: kube-system/storage-provisioner/storage-provisioner" id=bfa87b48-892a-4472-a548-7cf00d83d8ae name=/runtime.v1.RuntimeService/CreateContainer
	Sep 25 10:53:17 multinode-529126 crio[958]: time="2023-09-25 10:53:17.440907912Z" level=info msg="Starting container: e2aea337ff1dfe40627407fde2c52fc79f11b99acfc0efba5b6c613e41a00a27" id=c651ddad-0e7d-474d-a177-c412ff72fdaf name=/runtime.v1.RuntimeService/StartContainer
	Sep 25 10:53:17 multinode-529126 crio[958]: time="2023-09-25 10:53:17.441175051Z" level=info msg="Starting container: 009676d112661e7c3d3aabc91c6c553459b188d89f6e23d1fb1f9867ee2ad533" id=de0de702-c501-4795-aff2-96731c7cba0f name=/runtime.v1.RuntimeService/StartContainer
	Sep 25 10:53:17 multinode-529126 crio[958]: time="2023-09-25 10:53:17.451793615Z" level=info msg="Started container" PID=2333 containerID=e2aea337ff1dfe40627407fde2c52fc79f11b99acfc0efba5b6c613e41a00a27 description=kube-system/coredns-5dd5756b68-bl6dx/coredns id=c651ddad-0e7d-474d-a177-c412ff72fdaf name=/runtime.v1.RuntimeService/StartContainer sandboxID=c9035236a9cd083a3f96264e34fbdcba1cb45b1797bbdca06e6c2c8fe4f22eb8
	Sep 25 10:53:17 multinode-529126 crio[958]: time="2023-09-25 10:53:17.451816203Z" level=info msg="Started container" PID=2332 containerID=009676d112661e7c3d3aabc91c6c553459b188d89f6e23d1fb1f9867ee2ad533 description=kube-system/storage-provisioner/storage-provisioner id=de0de702-c501-4795-aff2-96731c7cba0f name=/runtime.v1.RuntimeService/StartContainer sandboxID=603b00a1fe02749e8f602e22d322641f93bbc0ba73babdf55be0695c5cfc533e
	Sep 25 10:53:37 multinode-529126 crio[958]: time="2023-09-25 10:53:37.919340652Z" level=info msg="Running pod sandbox: default/busybox-5bc68d56bd-jnhqs/POD" id=e43b82d7-1ccc-4581-ba82-6ed6d544078d name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 25 10:53:37 multinode-529126 crio[958]: time="2023-09-25 10:53:37.919397438Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 25 10:53:37 multinode-529126 crio[958]: time="2023-09-25 10:53:37.933647217Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-jnhqs Namespace:default ID:971fa6cbc5b5ec86ecb03099680af50dbb123a9560d5bfe6d9fb9649d45d4bb8 UID:df316e95-c449-4069-a71e-518e7338860e NetNS:/var/run/netns/c17d8423-22c5-4627-b158-7576144fb95b Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 25 10:53:37 multinode-529126 crio[958]: time="2023-09-25 10:53:37.933677091Z" level=info msg="Adding pod default_busybox-5bc68d56bd-jnhqs to CNI network \"kindnet\" (type=ptp)"
	Sep 25 10:53:37 multinode-529126 crio[958]: time="2023-09-25 10:53:37.941770907Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-jnhqs Namespace:default ID:971fa6cbc5b5ec86ecb03099680af50dbb123a9560d5bfe6d9fb9649d45d4bb8 UID:df316e95-c449-4069-a71e-518e7338860e NetNS:/var/run/netns/c17d8423-22c5-4627-b158-7576144fb95b Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 25 10:53:37 multinode-529126 crio[958]: time="2023-09-25 10:53:37.941876268Z" level=info msg="Checking pod default_busybox-5bc68d56bd-jnhqs for CNI network kindnet (type=ptp)"
	Sep 25 10:53:37 multinode-529126 crio[958]: time="2023-09-25 10:53:37.959643220Z" level=info msg="Ran pod sandbox 971fa6cbc5b5ec86ecb03099680af50dbb123a9560d5bfe6d9fb9649d45d4bb8 with infra container: default/busybox-5bc68d56bd-jnhqs/POD" id=e43b82d7-1ccc-4581-ba82-6ed6d544078d name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 25 10:53:37 multinode-529126 crio[958]: time="2023-09-25 10:53:37.960608984Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=ae4c2ec6-7249-43ca-af1e-5276dd644061 name=/runtime.v1.ImageService/ImageStatus
	Sep 25 10:53:37 multinode-529126 crio[958]: time="2023-09-25 10:53:37.960859125Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=ae4c2ec6-7249-43ca-af1e-5276dd644061 name=/runtime.v1.ImageService/ImageStatus
	Sep 25 10:53:37 multinode-529126 crio[958]: time="2023-09-25 10:53:37.961576303Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=2b14bd56-2b87-459c-8587-968016f72a72 name=/runtime.v1.ImageService/PullImage
	Sep 25 10:53:37 multinode-529126 crio[958]: time="2023-09-25 10:53:37.965970525Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Sep 25 10:53:38 multinode-529126 crio[958]: time="2023-09-25 10:53:38.154598445Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Sep 25 10:53:38 multinode-529126 crio[958]: time="2023-09-25 10:53:38.575219942Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335" id=2b14bd56-2b87-459c-8587-968016f72a72 name=/runtime.v1.ImageService/PullImage
	Sep 25 10:53:38 multinode-529126 crio[958]: time="2023-09-25 10:53:38.576146132Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=8c85c68c-b400-4704-aae6-5054684dffab name=/runtime.v1.ImageService/ImageStatus
	Sep 25 10:53:38 multinode-529126 crio[958]: time="2023-09-25 10:53:38.576819952Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=8c85c68c-b400-4704-aae6-5054684dffab name=/runtime.v1.ImageService/ImageStatus
	Sep 25 10:53:38 multinode-529126 crio[958]: time="2023-09-25 10:53:38.577687251Z" level=info msg="Creating container: default/busybox-5bc68d56bd-jnhqs/busybox" id=5dfbdfa5-e337-4133-8ae9-30a20f31f042 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 25 10:53:38 multinode-529126 crio[958]: time="2023-09-25 10:53:38.577780050Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 25 10:53:38 multinode-529126 crio[958]: time="2023-09-25 10:53:38.663664065Z" level=info msg="Created container c0505a8661cdc1a071cd92175f57f3d72e119db5904b81e0776ae009c543fad8: default/busybox-5bc68d56bd-jnhqs/busybox" id=5dfbdfa5-e337-4133-8ae9-30a20f31f042 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 25 10:53:38 multinode-529126 crio[958]: time="2023-09-25 10:53:38.665469392Z" level=info msg="Starting container: c0505a8661cdc1a071cd92175f57f3d72e119db5904b81e0776ae009c543fad8" id=eeb60b1e-6b5b-4d8d-b0a0-adb579e7486c name=/runtime.v1.RuntimeService/StartContainer
	Sep 25 10:53:38 multinode-529126 crio[958]: time="2023-09-25 10:53:38.673372754Z" level=info msg="Started container" PID=2504 containerID=c0505a8661cdc1a071cd92175f57f3d72e119db5904b81e0776ae009c543fad8 description=default/busybox-5bc68d56bd-jnhqs/busybox id=eeb60b1e-6b5b-4d8d-b0a0-adb579e7486c name=/runtime.v1.RuntimeService/StartContainer sandboxID=971fa6cbc5b5ec86ecb03099680af50dbb123a9560d5bfe6d9fb9649d45d4bb8
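The CRI-O lines above trace the standard CRI call sequence for the busybox pod: RunPodSandbox, an ImageStatus/PullImage round for gcr.io/k8s-minikube/busybox:1.28, then CreateContainer and StartContainer. A minimal sketch of talking to the same socket with the CRI v1 API (an illustration, not part of the test suite; assumes read access to /var/run/crio/crio.sock):

// List pod sandboxes over the CRI-O socket, roughly what `crictl pods` does.
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.ListPodSandbox(context.TODO(), &runtimeapi.ListPodSandboxRequest{})
	if err != nil {
		panic(err)
	}
	for _, s := range resp.Items {
		// e.g. "default/busybox-5bc68d56bd-jnhqs  SANDBOX_READY"
		fmt.Printf("%s/%s  %s\n", s.Metadata.Namespace, s.Metadata.Name, s.State)
	}
}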
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	c0505a8661cdc       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 seconds ago        Running             busybox                   0                   971fa6cbc5b5e       busybox-5bc68d56bd-jnhqs
	e2aea337ff1df       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      25 seconds ago       Running             coredns                   0                   c9035236a9cd0       coredns-5dd5756b68-bl6dx
	009676d112661       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      25 seconds ago       Running             storage-provisioner       0                   603b00a1fe027       storage-provisioner
	7e843d1431a76       c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0                                      56 seconds ago       Running             kube-proxy                0                   9417430a95d10       kube-proxy-wlsv6
	55ee5b0b19544       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      56 seconds ago       Running             kindnet-cni               0                   a23eb327de76a       kindnet-62xf8
	3c94426d64f18       7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8                                      About a minute ago   Running             kube-scheduler            0                   7c5173e558685       kube-scheduler-multinode-529126
	921788796f43a       55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57                                      About a minute ago   Running             kube-controller-manager   0                   c08441bc790df       kube-controller-manager-multinode-529126
	5013735c0b755       cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce                                      About a minute ago   Running             kube-apiserver            0                   76e168ca75614       kube-apiserver-multinode-529126
	b748545c246f4       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      0                   06cb4908d6691       etcd-multinode-529126
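The table below this log is the container-side view of the same runtime. A companion sketch, under the same socket assumption as above, that pulls the CONTAINER/NAME/STATE/CREATED columns from the CRI ListContainers call:

// List containers over the CRI-O socket, roughly what `crictl ps -a` does.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.ListContainers(context.TODO(), &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		created := time.Unix(0, c.CreatedAt) // CreatedAt is nanoseconds since epoch
		fmt.Printf("%-13.13s  %-25s  %-20s  %s\n",
			c.Id, c.Metadata.Name, c.State, created.Format(time.RFC3339))
	}
}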
	
	* 
	* ==> coredns [e2aea337ff1dfe40627407fde2c52fc79f11b99acfc0efba5b6c613e41a00a27] <==
	* [INFO] 10.244.1.2:42707 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008971s
	[INFO] 10.244.0.3:49333 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000095555s
	[INFO] 10.244.0.3:53576 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001306128s
	[INFO] 10.244.0.3:34503 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00005125s
	[INFO] 10.244.0.3:37819 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000065696s
	[INFO] 10.244.0.3:39663 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.000862338s
	[INFO] 10.244.0.3:54455 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000058432s
	[INFO] 10.244.0.3:53854 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000042216s
	[INFO] 10.244.0.3:57550 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00004971s
	[INFO] 10.244.1.2:36323 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110295s
	[INFO] 10.244.1.2:42792 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000084349s
	[INFO] 10.244.1.2:38655 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000067755s
	[INFO] 10.244.1.2:57773 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070574s
	[INFO] 10.244.0.3:49247 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000101416s
	[INFO] 10.244.0.3:42894 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000084624s
	[INFO] 10.244.0.3:59120 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00005969s
	[INFO] 10.244.0.3:35000 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000058679s
	[INFO] 10.244.1.2:42414 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124909s
	[INFO] 10.244.1.2:57696 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000102062s
	[INFO] 10.244.1.2:35001 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000095666s
	[INFO] 10.244.1.2:42725 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000101159s
	[INFO] 10.244.0.3:55943 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097171s
	[INFO] 10.244.0.3:36500 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000077727s
	[INFO] 10.244.0.3:48742 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000041972s
	[INFO] 10.244.0.3:42829 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00005821s
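Each CoreDNS line above follows the log plugin's query format: client address and port, query ID, then the quoted tuple of record type, class, name, transport, request size, DO bit, and EDNS UDP buffer size, followed by the response code, header flags (qr,aa,rd), response size in bytes, and the service duration. A minimal sketch of a lookup that would produce such a line, assuming it runs inside a cluster pod whose resolv.conf points at the CoreDNS service:

// Resolve the API-server service name the queries above are asking about.
package main

import (
	"context"
	"fmt"
	"net"
)

func main() {
	ips, err := net.DefaultResolver.LookupIPAddr(context.TODO(),
		"kubernetes.default.svc.cluster.local")
	if err != nil {
		panic(err)
	}
	for _, ip := range ips {
		// The PTR queries for 1.0.96.10.in-addr.arpa in the log suggest
		// this resolves to 10.96.0.1 in this cluster.
		fmt.Println(ip.String())
	}
}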
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-529126
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-529126
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1bf6c3d5317028f348e55ea19d261973a6487d3c
	                    minikube.k8s.io/name=multinode-529126
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_25T10_52_33_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 25 Sep 2023 10:52:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-529126
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 25 Sep 2023 10:53:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 25 Sep 2023 10:53:17 +0000   Mon, 25 Sep 2023 10:52:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 25 Sep 2023 10:53:17 +0000   Mon, 25 Sep 2023 10:52:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 25 Sep 2023 10:53:17 +0000   Mon, 25 Sep 2023 10:52:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 25 Sep 2023 10:53:17 +0000   Mon, 25 Sep 2023 10:53:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-529126
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	System Info:
	  Machine ID:                 e6b44eb10fdf42aab0976432761a0c68
	  System UUID:                f988f5ae-433e-47bf-8220-113d436bee01
	  Boot ID:                    a0198791-e836-4d6b-a7bd-f74954d514fc
	  Kernel Version:             5.15.0-1042-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-jnhqs                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 coredns-5dd5756b68-bl6dx                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     57s
	  kube-system                 etcd-multinode-529126                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         69s
	  kube-system                 kindnet-62xf8                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      57s
	  kube-system                 kube-apiserver-multinode-529126             250m (3%)     0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 kube-controller-manager-multinode-529126    200m (2%)     0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 kube-proxy-wlsv6                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-scheduler-multinode-529126             100m (1%)     0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 56s                kube-proxy       
	  Normal  Starting                 75s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  75s (x8 over 75s)  kubelet          Node multinode-529126 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    75s (x8 over 75s)  kubelet          Node multinode-529126 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     75s (x8 over 75s)  kubelet          Node multinode-529126 status is now: NodeHasSufficientPID
	  Normal  Starting                 70s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  69s                kubelet          Node multinode-529126 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    69s                kubelet          Node multinode-529126 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     69s                kubelet          Node multinode-529126 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           58s                node-controller  Node multinode-529126 event: Registered Node multinode-529126 in Controller
	  Normal  NodeReady                25s                kubelet          Node multinode-529126 status is now: NodeReady
	
	
	Name:               multinode-529126-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-529126-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 25 Sep 2023 10:53:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-529126-m02" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 25 Sep 2023 10:53:35 +0000   Mon, 25 Sep 2023 10:53:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 25 Sep 2023 10:53:35 +0000   Mon, 25 Sep 2023 10:53:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 25 Sep 2023 10:53:35 +0000   Mon, 25 Sep 2023 10:53:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 25 Sep 2023 10:53:35 +0000   Mon, 25 Sep 2023 10:53:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-529126-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	System Info:
	  Machine ID:                 bf4f2041ca624f19beff201f464c81a2
	  System UUID:                dc053b63-e41d-4f22-8f12-2fc16a16a80e
	  Boot ID:                    a0198791-e836-4d6b-a7bd-f74954d514fc
	  Kernel Version:             5.15.0-1042-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-6xmht    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 kindnet-j4jb6               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      9s
	  kube-system                 kube-proxy-bgjg6            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age               From             Message
	  ----    ------                   ----              ----             -------
	  Normal  Starting                 7s                kube-proxy       
	  Normal  NodeHasSufficientMemory  9s (x5 over 10s)  kubelet          Node multinode-529126-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x5 over 10s)  kubelet          Node multinode-529126-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x5 over 10s)  kubelet          Node multinode-529126-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           8s                node-controller  Node multinode-529126-m02 event: Registered Node multinode-529126-m02 in Controller
	  Normal  NodeReady                7s                kubelet          Node multinode-529126-m02 status is now: NodeReady
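
The "Lease: Failed to get lease ... not found" field above is expected for a node this young: the kubelet only creates its Lease object in the kube-node-lease namespace shortly after registration. A quick check, assuming the test cluster's kubeconfig context is still reachable:

    kubectl --context multinode-529126 -n kube-node-lease get leases
    kubectl --context multinode-529126 -n kube-node-lease get lease multinode-529126-m02 -o yaml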
	
	* 
	* ==> dmesg <==
	* [  +0.004937] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006662] FS-Cache: N-cookie d=00000000d258af6f{9p.inode} n=0000000062fac2c7
	[  +0.008748] FS-Cache: N-key=[8] '92a00f0200000000'
	[  +4.069384] FS-Cache: Duplicate cookie detected
	[  +0.004741] FS-Cache: O-cookie c=00000012 [p=00000002 fl=222 nc=0 na=1]
	[  +0.006770] FS-Cache: O-cookie d=000000002b1f06e9{9P.session} n=00000000674afd8c
	[  +0.007526] FS-Cache: O-key=[10] '34323935323731363530'
	[  +0.005389] FS-Cache: N-cookie c=00000013 [p=00000002 fl=2 nc=0 na=1]
	[  +0.006558] FS-Cache: N-cookie d=000000002b1f06e9{9P.session} n=000000000141c22c
	[  +0.007506] FS-Cache: N-key=[10] '34323935323731363530'
	[  +8.038207] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Sep25 10:44] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 3a 5d 55 4e d6 7b ce 86 a6 6c 60 67 08 00
	[  +1.008185] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 3a 5d 55 4e d6 7b ce 86 a6 6c 60 67 08 00
	[  +2.015763] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 3a 5d 55 4e d6 7b ce 86 a6 6c 60 67 08 00
	[  +4.063598] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 3a 5d 55 4e d6 7b ce 86 a6 6c 60 67 08 00
	[  +8.191166] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 3a 5d 55 4e d6 7b ce 86 a6 6c 60 67 08 00
	[Sep25 10:45] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 3a 5d 55 4e d6 7b ce 86 a6 6c 60 67 08 00
	[ +33.020822] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 3a 5d 55 4e d6 7b ce 86 a6 6c 60 67 08 00
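
The repeated "IPv4: martian source 10.244.0.5 from 127.0.0.1" entries mean the kernel saw packets on eth0 claiming a loopback source address and logged them as martians. This is the usual side effect of route_localnet=1, which kube-proxy sets (see its log below) so NodePorts answer on 127.0.0.1. A minimal way to inspect the relevant sysctls, assuming SSH access through minikube:

    minikube -p multinode-529126 ssh -- sysctl net.ipv4.conf.all.rp_filter \
        net.ipv4.conf.all.log_martians net.ipv4.conf.all.route_localnet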
	
	* 
	* ==> etcd [b748545c246f45f6d69c4a25101ab8eeb19c91ad3b158af4cc6099ae702b4dba] <==
	* {"level":"info","ts":"2023-09-25T10:52:27.85012Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-09-25T10:52:27.850253Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-09-25T10:52:27.851627Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-09-25T10:52:27.851765Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-09-25T10:52:27.851841Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-09-25T10:52:27.851923Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-25T10:52:27.851959Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-25T10:52:28.171926Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-09-25T10:52:28.171967Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-09-25T10:52:28.171982Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-09-25T10:52:28.171995Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-09-25T10:52:28.172Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-09-25T10:52:28.172009Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-09-25T10:52:28.172016Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-09-25T10:52:28.173228Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-529126 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-25T10:52:28.173279Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-25T10:52:28.17335Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-25T10:52:28.173349Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-25T10:52:28.173415Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-25T10:52:28.173728Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-25T10:52:28.174287Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-25T10:52:28.17438Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-25T10:52:28.174414Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-25T10:52:28.174615Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-09-25T10:52:28.174711Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  10:53:42 up 36 min,  0 users,  load average: 1.07, 1.01, 0.72
	Linux multinode-529126 5.15.0-1042-gcp #50~20.04.1-Ubuntu SMP Mon Sep 11 03:30:57 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [55ee5b0b195441fa9c2c8c62f604549b8b86af118bb3c05207693c35984a560f] <==
	* I0925 10:52:46.352472       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0925 10:52:46.352527       1 main.go:107] hostIP = 192.168.58.2
	podIP = 192.168.58.2
	I0925 10:52:46.444764       1 main.go:116] setting mtu 1500 for CNI 
	I0925 10:52:46.444852       1 main.go:146] kindnetd IP family: "ipv4"
	I0925 10:52:46.444877       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0925 10:53:16.582759       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0925 10:53:16.590196       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0925 10:53:16.590228       1 main.go:227] handling current node
	I0925 10:53:26.603903       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0925 10:53:26.603924       1 main.go:227] handling current node
	I0925 10:53:36.616260       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0925 10:53:36.616284       1 main.go:227] handling current node
	I0925 10:53:36.616293       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0925 10:53:36.616297       1 main.go:250] Node multinode-529126-m02 has CIDR [10.244.1.0/24] 
	I0925 10:53:36.616451       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
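
The final kindnet line corresponds to an ordinary kernel route: traffic for the second node's pod CIDR is forwarded via that node's IP. Roughly equivalent by hand (kindnetd itself programs this through netlink), and verifiable from inside the primary node:

    # what kindnet effectively installed on multinode-529126
    ip route replace 10.244.1.0/24 via 192.168.58.3
    # confirm from outside via minikube ssh
    minikube -p multinode-529126 ssh -- ip route show 10.244.1.0/24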
	
	* 
	* ==> kube-apiserver [5013735c0b755577295ab8cade8dc5d68a421efa8e7481ed68d0f66a4e08455a] <==
	* I0925 10:52:30.245074       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0925 10:52:30.245215       1 shared_informer.go:318] Caches are synced for configmaps
	I0925 10:52:30.245625       1 aggregator.go:166] initial CRD sync complete...
	I0925 10:52:30.245685       1 autoregister_controller.go:141] Starting autoregister controller
	I0925 10:52:30.245719       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0925 10:52:30.245694       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0925 10:52:30.245751       1 cache.go:39] Caches are synced for autoregister controller
	I0925 10:52:30.246242       1 controller.go:624] quota admission added evaluator for: namespaces
	E0925 10:52:30.249025       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0925 10:52:30.452170       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0925 10:52:31.012231       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0925 10:52:31.015341       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0925 10:52:31.015356       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0925 10:52:31.369384       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0925 10:52:31.403600       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0925 10:52:31.462406       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0925 10:52:31.467687       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0925 10:52:31.468615       1 controller.go:624] quota admission added evaluator for: endpoints
	I0925 10:52:31.472563       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0925 10:52:32.089441       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0925 10:52:32.879967       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0925 10:52:32.888084       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0925 10:52:32.896344       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0925 10:52:45.297129       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0925 10:52:45.895463       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
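
The apiserver log records the default/kubernetes Service being allocated ClusterIP 10.96.0.1 and its endpoints reset to the control-plane IP 192.168.58.2. Both are easy to confirm while the cluster is up:

    kubectl --context multinode-529126 get svc kubernetes -o wide
    kubectl --context multinode-529126 get endpoints kubernetes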
	
	* 
	* ==> kube-controller-manager [921788796f43af6aed5c8ff6568fc81216870d7bf605e76599623d0bcd108837] <==
	* I0925 10:53:17.047363       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="131.564µs"
	I0925 10:53:18.115910       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="97.868µs"
	I0925 10:53:18.138877       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="5.822812ms"
	I0925 10:53:18.138980       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="64.06µs"
	I0925 10:53:19.908092       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0925 10:53:33.800181       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-529126-m02\" does not exist"
	I0925 10:53:33.810685       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-j4jb6"
	I0925 10:53:33.813685       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-529126-m02" podCIDRs=["10.244.1.0/24"]
	I0925 10:53:33.814249       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-bgjg6"
	I0925 10:53:34.909914       1 event.go:307] "Event occurred" object="multinode-529126-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-529126-m02 event: Registered Node multinode-529126-m02 in Controller"
	I0925 10:53:34.909983       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-529126-m02"
	I0925 10:53:35.243165       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-529126-m02"
	I0925 10:53:37.600999       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I0925 10:53:37.606656       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-6xmht"
	I0925 10:53:37.611764       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-jnhqs"
	I0925 10:53:37.619627       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="18.862504ms"
	I0925 10:53:37.624247       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="4.555864ms"
	I0925 10:53:37.624333       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="54.422µs"
	I0925 10:53:37.627138       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="87.901µs"
	I0925 10:53:37.628808       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="70.479µs"
	I0925 10:53:39.155424       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="3.737013ms"
	I0925 10:53:39.155492       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="37.975µs"
	I0925 10:53:39.333688       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="3.668292ms"
	I0925 10:53:39.333779       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="51.697µs"
	I0925 10:53:39.920092       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-6xmht" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-6xmht"
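
The controller-manager log shows the node IPAM controller assigning 10.244.1.0/24 to multinode-529126-m02 and, once a Ready node is seen, the lifecycle controller leaving master disruption mode. The assigned CIDR can be read straight off the Node object:

    kubectl --context multinode-529126 get node multinode-529126-m02 \
        -o jsonpath='{.spec.podCIDR}'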
	
	* 
	* ==> kube-proxy [7e843d1431a763d1100efc6e7d55a235fdbd1084dfaa62fcbd088ea70d03800f] <==
	* I0925 10:52:46.378692       1 server_others.go:69] "Using iptables proxy"
	I0925 10:52:46.387721       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I0925 10:52:46.404923       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0925 10:52:46.406507       1 server_others.go:152] "Using iptables Proxier"
	I0925 10:52:46.406530       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0925 10:52:46.406535       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0925 10:52:46.406555       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0925 10:52:46.406773       1 server.go:846] "Version info" version="v1.28.2"
	I0925 10:52:46.406785       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0925 10:52:46.407258       1 config.go:188] "Starting service config controller"
	I0925 10:52:46.407288       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0925 10:52:46.407306       1 config.go:315] "Starting node config controller"
	I0925 10:52:46.407317       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0925 10:52:46.407323       1 config.go:97] "Starting endpoint slice config controller"
	I0925 10:52:46.407329       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0925 10:52:46.507688       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0925 10:52:46.507727       1 shared_informer.go:318] Caches are synced for service config
	I0925 10:52:46.507694       1 shared_informer.go:318] Caches are synced for node config
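
kube-proxy is running the iptables proxier and sets route_localnet=1 so NodePorts answer on localhost (the source of the martian log entries above). Assuming iptables is available in the node image, the NAT chains it programs can be listed with:

    minikube -p multinode-529126 ssh -- sudo iptables -t nat -L KUBE-SERVICES -n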
	
	* 
	* ==> kube-scheduler [3c94426d64f18b70f258e347f2988a97e5401c0627e09f5e29653d95a2b82734] <==
	* W0925 10:52:30.269881       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0925 10:52:30.269907       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0925 10:52:30.269915       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0925 10:52:30.269932       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0925 10:52:30.269977       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0925 10:52:30.269996       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0925 10:52:30.270005       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0925 10:52:30.270021       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0925 10:52:30.270062       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0925 10:52:30.270074       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0925 10:52:30.270088       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0925 10:52:30.270042       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0925 10:52:30.270138       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0925 10:52:30.270165       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0925 10:52:30.270176       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0925 10:52:30.270186       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0925 10:52:30.270294       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0925 10:52:30.270190       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0925 10:52:31.133970       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0925 10:52:31.134019       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0925 10:52:31.210038       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0925 10:52:31.210073       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0925 10:52:31.235285       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0925 10:52:31.235323       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0925 10:52:33.965267       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
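
The scheduler's "forbidden" reflector warnings are the usual startup race: its informers begin listing before kubeadm finishes bootstrapping RBAC, and the closing "Caches are synced" line shows the grants arrived about a second later. Individual permissions can be probed after the fact with impersonation:

    kubectl --context multinode-529126 auth can-i list nodes --as=system:kube-scheduler
    kubectl --context multinode-529126 auth can-i list csinodes.storage.k8s.io --as=system:kube-scheduler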
	
	* 
	* ==> kubelet <==
	* Sep 25 10:52:45 multinode-529126 kubelet[1595]: I0925 10:52:45.950298    1595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e04d98ce-ec4c-4cb4-8ae8-329b6240c025-xtables-lock\") pod \"kube-proxy-wlsv6\" (UID: \"e04d98ce-ec4c-4cb4-8ae8-329b6240c025\") " pod="kube-system/kube-proxy-wlsv6"
	Sep 25 10:52:45 multinode-529126 kubelet[1595]: I0925 10:52:45.950341    1595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e04d98ce-ec4c-4cb4-8ae8-329b6240c025-lib-modules\") pod \"kube-proxy-wlsv6\" (UID: \"e04d98ce-ec4c-4cb4-8ae8-329b6240c025\") " pod="kube-system/kube-proxy-wlsv6"
	Sep 25 10:52:45 multinode-529126 kubelet[1595]: I0925 10:52:45.950370    1595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/23f29aa7-de9c-43bc-950c-59009bd0d74e-cni-cfg\") pod \"kindnet-62xf8\" (UID: \"23f29aa7-de9c-43bc-950c-59009bd0d74e\") " pod="kube-system/kindnet-62xf8"
	Sep 25 10:52:45 multinode-529126 kubelet[1595]: I0925 10:52:45.950397    1595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/23f29aa7-de9c-43bc-950c-59009bd0d74e-lib-modules\") pod \"kindnet-62xf8\" (UID: \"23f29aa7-de9c-43bc-950c-59009bd0d74e\") " pod="kube-system/kindnet-62xf8"
	Sep 25 10:52:45 multinode-529126 kubelet[1595]: I0925 10:52:45.950425    1595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7r7m\" (UniqueName: \"kubernetes.io/projected/23f29aa7-de9c-43bc-950c-59009bd0d74e-kube-api-access-r7r7m\") pod \"kindnet-62xf8\" (UID: \"23f29aa7-de9c-43bc-950c-59009bd0d74e\") " pod="kube-system/kindnet-62xf8"
	Sep 25 10:52:45 multinode-529126 kubelet[1595]: I0925 10:52:45.950458    1595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e04d98ce-ec4c-4cb4-8ae8-329b6240c025-kube-proxy\") pod \"kube-proxy-wlsv6\" (UID: \"e04d98ce-ec4c-4cb4-8ae8-329b6240c025\") " pod="kube-system/kube-proxy-wlsv6"
	Sep 25 10:52:46 multinode-529126 kubelet[1595]: W0925 10:52:46.245485    1595 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/6e3734cea073519e90a2c10bb6c835ae434fe2891a64f316b0a297aecd57d5d5/crio-9417430a95d10c595960125c45eb0cdce920a0546e8eba494f5d6a8ede059458 WatchSource:0}: Error finding container 9417430a95d10c595960125c45eb0cdce920a0546e8eba494f5d6a8ede059458: Status 404 returned error can't find the container with id 9417430a95d10c595960125c45eb0cdce920a0546e8eba494f5d6a8ede059458
	Sep 25 10:52:46 multinode-529126 kubelet[1595]: W0925 10:52:46.245758    1595 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/6e3734cea073519e90a2c10bb6c835ae434fe2891a64f316b0a297aecd57d5d5/crio-a23eb327de76a9348dd6bacd62fdfdf333f00fcf2871a50ad0e605ce28297ad5 WatchSource:0}: Error finding container a23eb327de76a9348dd6bacd62fdfdf333f00fcf2871a50ad0e605ce28297ad5: Status 404 returned error can't find the container with id a23eb327de76a9348dd6bacd62fdfdf333f00fcf2871a50ad0e605ce28297ad5
	Sep 25 10:52:47 multinode-529126 kubelet[1595]: I0925 10:52:47.067556    1595 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-wlsv6" podStartSLOduration=2.067509077 podCreationTimestamp="2023-09-25 10:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-25 10:52:47.057892363 +0000 UTC m=+14.202640195" watchObservedRunningTime="2023-09-25 10:52:47.067509077 +0000 UTC m=+14.212256910"
	Sep 25 10:52:47 multinode-529126 kubelet[1595]: I0925 10:52:47.067671    1595 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-62xf8" podStartSLOduration=2.067649532 podCreationTimestamp="2023-09-25 10:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-25 10:52:47.06737459 +0000 UTC m=+14.212122422" watchObservedRunningTime="2023-09-25 10:52:47.067649532 +0000 UTC m=+14.212397366"
	Sep 25 10:53:17 multinode-529126 kubelet[1595]: I0925 10:53:17.009837    1595 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Sep 25 10:53:17 multinode-529126 kubelet[1595]: I0925 10:53:17.030195    1595 topology_manager.go:215] "Topology Admit Handler" podUID="04177a18-0dee-40d2-aa22-df41fb209e8c" podNamespace="kube-system" podName="storage-provisioner"
	Sep 25 10:53:17 multinode-529126 kubelet[1595]: I0925 10:53:17.031415    1595 topology_manager.go:215] "Topology Admit Handler" podUID="a90d3ac5-8e74-4c8e-8e26-54a4ca4e1274" podNamespace="kube-system" podName="coredns-5dd5756b68-bl6dx"
	Sep 25 10:53:17 multinode-529126 kubelet[1595]: I0925 10:53:17.162189    1595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a90d3ac5-8e74-4c8e-8e26-54a4ca4e1274-config-volume\") pod \"coredns-5dd5756b68-bl6dx\" (UID: \"a90d3ac5-8e74-4c8e-8e26-54a4ca4e1274\") " pod="kube-system/coredns-5dd5756b68-bl6dx"
	Sep 25 10:53:17 multinode-529126 kubelet[1595]: I0925 10:53:17.162239    1595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glrq6\" (UniqueName: \"kubernetes.io/projected/04177a18-0dee-40d2-aa22-df41fb209e8c-kube-api-access-glrq6\") pod \"storage-provisioner\" (UID: \"04177a18-0dee-40d2-aa22-df41fb209e8c\") " pod="kube-system/storage-provisioner"
	Sep 25 10:53:17 multinode-529126 kubelet[1595]: I0925 10:53:17.162348    1595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkrj9\" (UniqueName: \"kubernetes.io/projected/a90d3ac5-8e74-4c8e-8e26-54a4ca4e1274-kube-api-access-tkrj9\") pod \"coredns-5dd5756b68-bl6dx\" (UID: \"a90d3ac5-8e74-4c8e-8e26-54a4ca4e1274\") " pod="kube-system/coredns-5dd5756b68-bl6dx"
	Sep 25 10:53:17 multinode-529126 kubelet[1595]: I0925 10:53:17.162408    1595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/04177a18-0dee-40d2-aa22-df41fb209e8c-tmp\") pod \"storage-provisioner\" (UID: \"04177a18-0dee-40d2-aa22-df41fb209e8c\") " pod="kube-system/storage-provisioner"
	Sep 25 10:53:17 multinode-529126 kubelet[1595]: W0925 10:53:17.377552    1595 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/6e3734cea073519e90a2c10bb6c835ae434fe2891a64f316b0a297aecd57d5d5/crio-603b00a1fe02749e8f602e22d322641f93bbc0ba73babdf55be0695c5cfc533e WatchSource:0}: Error finding container 603b00a1fe02749e8f602e22d322641f93bbc0ba73babdf55be0695c5cfc533e: Status 404 returned error can't find the container with id 603b00a1fe02749e8f602e22d322641f93bbc0ba73babdf55be0695c5cfc533e
	Sep 25 10:53:17 multinode-529126 kubelet[1595]: W0925 10:53:17.377826    1595 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/6e3734cea073519e90a2c10bb6c835ae434fe2891a64f316b0a297aecd57d5d5/crio-c9035236a9cd083a3f96264e34fbdcba1cb45b1797bbdca06e6c2c8fe4f22eb8 WatchSource:0}: Error finding container c9035236a9cd083a3f96264e34fbdcba1cb45b1797bbdca06e6c2c8fe4f22eb8: Status 404 returned error can't find the container with id c9035236a9cd083a3f96264e34fbdcba1cb45b1797bbdca06e6c2c8fe4f22eb8
	Sep 25 10:53:18 multinode-529126 kubelet[1595]: I0925 10:53:18.115979    1595 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-bl6dx" podStartSLOduration=33.115927023 podCreationTimestamp="2023-09-25 10:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-25 10:53:18.115569217 +0000 UTC m=+45.260317049" watchObservedRunningTime="2023-09-25 10:53:18.115927023 +0000 UTC m=+45.260674856"
	Sep 25 10:53:18 multinode-529126 kubelet[1595]: I0925 10:53:18.132986    1595 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=32.13292855 podCreationTimestamp="2023-09-25 10:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-25 10:53:18.12446724 +0000 UTC m=+45.269215074" watchObservedRunningTime="2023-09-25 10:53:18.13292855 +0000 UTC m=+45.277676420"
	Sep 25 10:53:37 multinode-529126 kubelet[1595]: I0925 10:53:37.617190    1595 topology_manager.go:215] "Topology Admit Handler" podUID="df316e95-c449-4069-a71e-518e7338860e" podNamespace="default" podName="busybox-5bc68d56bd-jnhqs"
	Sep 25 10:53:37 multinode-529126 kubelet[1595]: I0925 10:53:37.770690    1595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjmwl\" (UniqueName: \"kubernetes.io/projected/df316e95-c449-4069-a71e-518e7338860e-kube-api-access-xjmwl\") pod \"busybox-5bc68d56bd-jnhqs\" (UID: \"df316e95-c449-4069-a71e-518e7338860e\") " pod="default/busybox-5bc68d56bd-jnhqs"
	Sep 25 10:53:37 multinode-529126 kubelet[1595]: W0925 10:53:37.957334    1595 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/6e3734cea073519e90a2c10bb6c835ae434fe2891a64f316b0a297aecd57d5d5/crio-971fa6cbc5b5ec86ecb03099680af50dbb123a9560d5bfe6d9fb9649d45d4bb8 WatchSource:0}: Error finding container 971fa6cbc5b5ec86ecb03099680af50dbb123a9560d5bfe6d9fb9649d45d4bb8: Status 404 returned error can't find the container with id 971fa6cbc5b5ec86ecb03099680af50dbb123a9560d5bfe6d9fb9649d45d4bb8
	Sep 25 10:53:39 multinode-529126 kubelet[1595]: I0925 10:53:39.151785    1595 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-5bc68d56bd-jnhqs" podStartSLOduration=1.537100261 podCreationTimestamp="2023-09-25 10:53:37 +0000 UTC" firstStartedPulling="2023-09-25 10:53:37.96102913 +0000 UTC m=+65.105776951" lastFinishedPulling="2023-09-25 10:53:38.575661464 +0000 UTC m=+65.720409286" observedRunningTime="2023-09-25 10:53:39.151500367 +0000 UTC m=+66.296248201" watchObservedRunningTime="2023-09-25 10:53:39.151732596 +0000 UTC m=+66.296480430"
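
The kubelet entries above come from the systemd journal inside the node container; the same stream can be tailed directly when triaging, assuming the standard kicbase layout where the kubelet runs as a systemd unit:

    minikube -p multinode-529126 ssh -- sudo journalctl -u kubelet --no-pager -n 50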
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-529126 -n multinode-529126
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-529126 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (2.98s)
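
To iterate on this one failure without running the whole suite, the test can be selected by name from the minikube source tree (a sketch; the exact flag set, including any build tags and start args, is encoded by the repository's "make integration" target, and a freshly built out/minikube-linux-amd64 is assumed):

    go test ./test/integration -run 'TestMultiNode/serial/PingHostFrom2Pods' -v -timeout 30m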

                                                
                                    
x
+
TestRunningBinaryUpgrade (74.73s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.9.0.1504377201.exe start -p running-upgrade-735126 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.9.0.1504377201.exe start -p running-upgrade-735126 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m9.475809993s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-735126 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-735126 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (2.213947088s)

                                                
                                                
-- stdout --
	* [running-upgrade-735126] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17297
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17297-5744/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17297-5744/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-735126 in cluster running-upgrade-735126
	* Pulling base image ...
	* Updating the running docker "running-upgrade-735126" container ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0925 11:05:37.093265  181773 out.go:296] Setting OutFile to fd 1 ...
	I0925 11:05:37.093406  181773 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 11:05:37.093417  181773 out.go:309] Setting ErrFile to fd 2...
	I0925 11:05:37.093424  181773 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 11:05:37.093675  181773 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17297-5744/.minikube/bin
	I0925 11:05:37.094488  181773 out.go:303] Setting JSON to false
	I0925 11:05:37.096233  181773 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2889,"bootTime":1695637048,"procs":391,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0925 11:05:37.096342  181773 start.go:138] virtualization: kvm guest
	I0925 11:05:37.098540  181773 out.go:177] * [running-upgrade-735126] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0925 11:05:37.100600  181773 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 11:05:37.100858  181773 notify.go:220] Checking for updates...
	I0925 11:05:37.102133  181773 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 11:05:37.103671  181773 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17297-5744/kubeconfig
	I0925 11:05:37.105109  181773 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17297-5744/.minikube
	I0925 11:05:37.106589  181773 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0925 11:05:37.107904  181773 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 11:05:37.109681  181773 config.go:182] Loaded profile config "running-upgrade-735126": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0925 11:05:37.109707  181773 start_flags.go:698] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3
	I0925 11:05:37.111598  181773 out.go:177] * Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	I0925 11:05:37.112859  181773 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 11:05:37.138001  181773 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0925 11:05:37.138079  181773 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0925 11:05:37.206566  181773 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:87 SystemTime:2023-09-25 11:05:37.195517298 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1042-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0925 11:05:37.206704  181773 docker.go:294] overlay module found
	I0925 11:05:37.208733  181773 out.go:177] * Using the docker driver based on existing profile
	I0925 11:05:37.210162  181773 start.go:298] selected driver: docker
	I0925 11:05:37.210178  181773 start.go:902] validating driver "docker" against &{Name:running-upgrade-735126 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-735126 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.4 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I0925 11:05:37.210281  181773 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 11:05:37.211228  181773 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0925 11:05:37.265539  181773 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:87 SystemTime:2023-09-25 11:05:37.257261549 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1042-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0925 11:05:37.265805  181773 cni.go:84] Creating CNI manager for ""
	I0925 11:05:37.265825  181773 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I0925 11:05:37.265832  181773 start_flags.go:321] config:
	{Name:running-upgrade-735126 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-735126 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.4 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I0925 11:05:37.267774  181773 out.go:177] * Starting control plane node running-upgrade-735126 in cluster running-upgrade-735126
	I0925 11:05:37.269124  181773 cache.go:122] Beginning downloading kic base image for docker with crio
	I0925 11:05:37.270398  181773 out.go:177] * Pulling base image ...
	I0925 11:05:37.271706  181773 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I0925 11:05:37.271733  181773 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I0925 11:05:37.292777  181773 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon, skipping pull
	I0925 11:05:37.292808  181773 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 exists in daemon, skipping load
	W0925 11:05:37.309969  181773 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0925 11:05:37.310099  181773 profile.go:148] Saving config to /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/running-upgrade-735126/config.json ...
	I0925 11:05:37.310193  181773 cache.go:107] acquiring lock: {Name:mk20c5c3f16f9925c6b32fe6e3873ada9cc8aac8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 11:05:37.310263  181773 cache.go:107] acquiring lock: {Name:mk90cf264abfe81cc1035d976549c6bcb442bd29 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 11:05:37.310264  181773 cache.go:107] acquiring lock: {Name:mk53145c4a57975dfc4e03bbaca44ebaec45056a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 11:05:37.310184  181773 cache.go:107] acquiring lock: {Name:mk90562eaed4e709b85074c83642db0602b046c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 11:05:37.310232  181773 cache.go:107] acquiring lock: {Name:mkfaffbf63c39a896e67a82c228eab98a3e30851 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 11:05:37.310350  181773 cache.go:195] Successfully downloaded all kic artifacts
	I0925 11:05:37.310345  181773 cache.go:107] acquiring lock: {Name:mkc69ea77dcf76b041b10222bc392488e144fb09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 11:05:37.310383  181773 start.go:365] acquiring machines lock for running-upgrade-735126: {Name:mk7d6682851b8e65eccecd112f414a4737339fb9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 11:05:37.310435  181773 cache.go:107] acquiring lock: {Name:mkce7fc029b63ec02d583579fd9316b475911c62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 11:05:37.310440  181773 cache.go:107] acquiring lock: {Name:mk07508feecb52ef97612b32f33373956037c251 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 11:05:37.310496  181773 cache.go:115] /home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0925 11:05:37.310492  181773 cache.go:115] /home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I0925 11:05:37.310504  181773 cache.go:115] /home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0925 11:05:37.310510  181773 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 339.23µs
	I0925 11:05:37.310514  181773 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 236.669µs
	I0925 11:05:37.310516  181773 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 294.006µs
	I0925 11:05:37.310524  181773 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I0925 11:05:37.310522  181773 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0925 11:05:37.310530  181773 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0925 11:05:37.310538  181773 cache.go:115] /home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I0925 11:05:37.310561  181773 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 374.05µs
	I0925 11:05:37.310574  181773 cache.go:115] /home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I0925 11:05:37.310577  181773 cache.go:115] /home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I0925 11:05:37.310585  181773 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I0925 11:05:37.310570  181773 cache.go:115] /home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I0925 11:05:37.310588  181773 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 191.654µs
	I0925 11:05:37.310603  181773 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 341.824µs
	I0925 11:05:37.310874  181773 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I0925 11:05:37.310800  181773 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I0925 11:05:37.310568  181773 cache.go:115] /home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I0925 11:05:37.310594  181773 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 197.911µs
	I0925 11:05:37.310907  181773 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I0925 11:05:37.310903  181773 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 641.122µs
	I0925 11:05:37.310928  181773 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I0925 11:05:37.310943  181773 cache.go:87] Successfully saved all images to host disk.
	I0925 11:05:37.310570  181773 start.go:369] acquired machines lock for "running-upgrade-735126" in 167.95µs
	I0925 11:05:37.311042  181773 start.go:96] Skipping create...Using existing machine configuration
	I0925 11:05:37.311057  181773 fix.go:54] fixHost starting: m01
	I0925 11:05:37.311368  181773 cli_runner.go:164] Run: docker container inspect running-upgrade-735126 --format={{.State.Status}}
	I0925 11:05:37.327698  181773 fix.go:102] recreateIfNeeded on running-upgrade-735126: state=Running err=<nil>
	W0925 11:05:37.327733  181773 fix.go:128] unexpected machine state, will restart: <nil>
	I0925 11:05:37.329814  181773 out.go:177] * Updating the running docker "running-upgrade-735126" container ...
	I0925 11:05:37.331121  181773 machine.go:88] provisioning docker machine ...
	I0925 11:05:37.331143  181773 ubuntu.go:169] provisioning hostname "running-upgrade-735126"
	I0925 11:05:37.331193  181773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-735126
	I0925 11:05:37.346708  181773 main.go:141] libmachine: Using SSH client type: native
	I0925 11:05:37.347082  181773 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 127.0.0.1 32946 <nil> <nil>}
	I0925 11:05:37.347098  181773 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-735126 && echo "running-upgrade-735126" | sudo tee /etc/hostname
	I0925 11:05:37.465351  181773 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-735126
	
	I0925 11:05:37.465435  181773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-735126
	I0925 11:05:37.481400  181773 main.go:141] libmachine: Using SSH client type: native
	I0925 11:05:37.481721  181773 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 127.0.0.1 32946 <nil> <nil>}
	I0925 11:05:37.481740  181773 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-735126' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-735126/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-735126' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0925 11:05:37.584358  181773 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0925 11:05:37.584389  181773 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17297-5744/.minikube CaCertPath:/home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17297-5744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17297-5744/.minikube}
	I0925 11:05:37.584435  181773 ubuntu.go:177] setting up certificates
	I0925 11:05:37.584446  181773 provision.go:83] configureAuth start
	I0925 11:05:37.584505  181773 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-735126
	I0925 11:05:37.601845  181773 provision.go:138] copyHostCerts
	I0925 11:05:37.601923  181773 exec_runner.go:144] found /home/jenkins/minikube-integration/17297-5744/.minikube/cert.pem, removing ...
	I0925 11:05:37.601938  181773 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17297-5744/.minikube/cert.pem
	I0925 11:05:37.602010  181773 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17297-5744/.minikube/cert.pem (1123 bytes)
	I0925 11:05:37.602129  181773 exec_runner.go:144] found /home/jenkins/minikube-integration/17297-5744/.minikube/key.pem, removing ...
	I0925 11:05:37.602141  181773 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17297-5744/.minikube/key.pem
	I0925 11:05:37.602178  181773 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17297-5744/.minikube/key.pem (1675 bytes)
	I0925 11:05:37.602254  181773 exec_runner.go:144] found /home/jenkins/minikube-integration/17297-5744/.minikube/ca.pem, removing ...
	I0925 11:05:37.602264  181773 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17297-5744/.minikube/ca.pem
	I0925 11:05:37.602300  181773 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17297-5744/.minikube/ca.pem (1078 bytes)
	I0925 11:05:37.602371  181773 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17297-5744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-735126 san=[172.17.0.4 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-735126]
	I0925 11:05:37.868953  181773 provision.go:172] copyRemoteCerts
	I0925 11:05:37.869008  181773 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0925 11:05:37.869039  181773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-735126
	I0925 11:05:37.885433  181773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32946 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/running-upgrade-735126/id_rsa Username:docker}
	I0925 11:05:37.964790  181773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0925 11:05:37.985168  181773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0925 11:05:38.003054  181773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0925 11:05:38.020861  181773 provision.go:86] duration metric: configureAuth took 436.394368ms
	I0925 11:05:38.020887  181773 ubuntu.go:193] setting minikube options for container-runtime
	I0925 11:05:38.021076  181773 config.go:182] Loaded profile config "running-upgrade-735126": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0925 11:05:38.021179  181773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-735126
	I0925 11:05:38.037855  181773 main.go:141] libmachine: Using SSH client type: native
	I0925 11:05:38.038238  181773 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 127.0.0.1 32946 <nil> <nil>}
	I0925 11:05:38.038259  181773 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0925 11:05:38.447545  181773 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0925 11:05:38.447571  181773 machine.go:91] provisioned docker machine in 1.116437472s
	I0925 11:05:38.447584  181773 start.go:300] post-start starting for "running-upgrade-735126" (driver="docker")
	I0925 11:05:38.447597  181773 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0925 11:05:38.447676  181773 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0925 11:05:38.447731  181773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-735126
	I0925 11:05:38.466876  181773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32946 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/running-upgrade-735126/id_rsa Username:docker}
	I0925 11:05:38.548966  181773 ssh_runner.go:195] Run: cat /etc/os-release
	I0925 11:05:38.551779  181773 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0925 11:05:38.551805  181773 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0925 11:05:38.551814  181773 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0925 11:05:38.551820  181773 info.go:137] Remote host: Ubuntu 19.10
	I0925 11:05:38.551829  181773 filesync.go:126] Scanning /home/jenkins/minikube-integration/17297-5744/.minikube/addons for local assets ...
	I0925 11:05:38.551888  181773 filesync.go:126] Scanning /home/jenkins/minikube-integration/17297-5744/.minikube/files for local assets ...
	I0925 11:05:38.551956  181773 filesync.go:149] local asset: /home/jenkins/minikube-integration/17297-5744/.minikube/files/etc/ssl/certs/125162.pem -> 125162.pem in /etc/ssl/certs
	I0925 11:05:38.552046  181773 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0925 11:05:38.558395  181773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/files/etc/ssl/certs/125162.pem --> /etc/ssl/certs/125162.pem (1708 bytes)
	I0925 11:05:38.575144  181773 start.go:303] post-start completed in 127.543644ms
	I0925 11:05:38.575220  181773 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0925 11:05:38.575264  181773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-735126
	I0925 11:05:38.593047  181773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32946 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/running-upgrade-735126/id_rsa Username:docker}
	I0925 11:05:38.673518  181773 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0925 11:05:38.677411  181773 fix.go:56] fixHost completed within 1.366348297s
	I0925 11:05:38.677432  181773 start.go:83] releasing machines lock for "running-upgrade-735126", held for 1.366402966s
	I0925 11:05:38.677496  181773 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-735126
	I0925 11:05:38.694626  181773 ssh_runner.go:195] Run: cat /version.json
	I0925 11:05:38.694645  181773 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0925 11:05:38.694686  181773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-735126
	I0925 11:05:38.694712  181773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-735126
	I0925 11:05:38.714279  181773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32946 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/running-upgrade-735126/id_rsa Username:docker}
	I0925 11:05:38.714486  181773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32946 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/running-upgrade-735126/id_rsa Username:docker}
	W0925 11:05:38.825505  181773 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0925 11:05:38.825569  181773 ssh_runner.go:195] Run: systemctl --version
	I0925 11:05:38.829656  181773 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0925 11:05:38.884816  181773 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0925 11:05:38.889141  181773 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0925 11:05:38.904556  181773 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0925 11:05:38.904676  181773 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0925 11:05:38.925273  181773 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0925 11:05:38.925297  181773 start.go:469] detecting cgroup driver to use...
	I0925 11:05:38.925331  181773 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0925 11:05:38.925379  181773 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0925 11:05:38.946967  181773 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0925 11:05:38.956464  181773 docker.go:197] disabling cri-docker service (if available) ...
	I0925 11:05:38.956516  181773 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0925 11:05:38.965158  181773 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0925 11:05:38.973613  181773 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0925 11:05:38.982282  181773 docker.go:207] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0925 11:05:38.982340  181773 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0925 11:05:39.058401  181773 docker.go:213] disabling docker service ...
	I0925 11:05:39.058462  181773 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0925 11:05:39.067580  181773 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0925 11:05:39.076780  181773 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0925 11:05:39.150407  181773 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0925 11:05:39.224717  181773 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0925 11:05:39.236194  181773 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0925 11:05:39.250559  181773 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0925 11:05:39.250618  181773 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0925 11:05:39.261931  181773 out.go:177] 
	W0925 11:05:39.263346  181773 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0925 11:05:39.263365  181773 out.go:239] * 
	* 
	W0925 11:05:39.264478  181773 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 11:05:39.266165  181773 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-735126 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-09-25 11:05:39.285836442 +0000 UTC m=+1930.886368532
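The proximate cause is visible at the tail of the stderr stream above: the new binary rewrites pause_image in /etc/crio/crio.conf.d/02-crio.conf, but the v1.9.0-era machine image (Ubuntu 19.10, per the provisioning log) ships no such drop-in, so sed exits with status 2 and start aborts with RUNTIME_ENABLE. Below is a minimal Go sketch of a guarded variant; it is illustrative only, not minikube's implementation, and the /etc/crio/crio.conf fallback path is an assumption:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// updatePauseImage rewrites pause_image only in a CRI-O config file that
// actually exists, instead of assuming the 02-crio.conf drop-in is present.
func updatePauseImage(pauseImage string) error {
	candidates := []string{
		"/etc/crio/crio.conf.d/02-crio.conf", // layout used by newer kicbase images
		"/etc/crio/crio.conf",                // assumed fallback for older images
	}
	for _, cfg := range candidates {
		if _, err := os.Stat(cfg); err != nil {
			continue // missing on this image generation; try the next candidate
		}
		cmd := fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, cfg)
		return exec.Command("sh", "-c", cmd).Run()
	}
	return fmt.Errorf("no CRI-O config found to set pause_image %q", pauseImage)
}

func main() {
	if err := updatePauseImage("registry.k8s.io/pause:3.2"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}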
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-735126
helpers_test.go:235: (dbg) docker inspect running-upgrade-735126:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "69af8bbb42b517d69e4b51987e04dda45bcd23473808a19dfa034d38b3d28937",
	        "Created": "2023-09-25T11:04:27.943694136Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 165959,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-09-25T11:04:31.185230283Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/69af8bbb42b517d69e4b51987e04dda45bcd23473808a19dfa034d38b3d28937/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/69af8bbb42b517d69e4b51987e04dda45bcd23473808a19dfa034d38b3d28937/hostname",
	        "HostsPath": "/var/lib/docker/containers/69af8bbb42b517d69e4b51987e04dda45bcd23473808a19dfa034d38b3d28937/hosts",
	        "LogPath": "/var/lib/docker/containers/69af8bbb42b517d69e4b51987e04dda45bcd23473808a19dfa034d38b3d28937/69af8bbb42b517d69e4b51987e04dda45bcd23473808a19dfa034d38b3d28937-json.log",
	        "Name": "/running-upgrade-735126",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-735126:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/441fb039840ba5f989e71458c217f43c8e1526d9ba9f353937fbd7c0a591385e-init/diff:/var/lib/docker/overlay2/0265b5cb525aaa49427cb36a86e9e2431c2fc9a9501418c1951d833b27b11771/diff:/var/lib/docker/overlay2/072da75404b70fae9eb87ccfce1ce9cfb75cc684159dd39d080d2675fc2fa686/diff:/var/lib/docker/overlay2/e66329d02ca4892798edb03f1e88c91d964088a8d191cf141901ddd9f511175d/diff:/var/lib/docker/overlay2/cab604cd8219e909f0538d4803eecc5d64dc498f8e65b0d9882132280a26144e/diff:/var/lib/docker/overlay2/5e389c062d243a1137a0cd0cd12e9beac5f5f126b66ab8ed046f95015d7bfae7/diff:/var/lib/docker/overlay2/891d4ea560d6a8949ae401689e7b073560a344cdab9d46262e7d43869484f6b5/diff:/var/lib/docker/overlay2/01159c271fd4a78cbfb17432570ded3b271b678652111f0b07ba726c1021c52d/diff:/var/lib/docker/overlay2/c728d722589937a73597827ba45859c1e6b50197995c85ef4bba21bef05184c3/diff:/var/lib/docker/overlay2/b3ae09e157b8b2a0078e13b23bfd2e40502581c75b9750f265f218f81086feca/diff:/var/lib/docker/overlay2/872d8d757dd3b989cc09d048691dca7ce1c91990bb238de4ab724d067216881c/diff:/var/lib/docker/overlay2/e7767ebe1d6c5af03334a0d5bdc2604cdbd140b72b0eff45ea0701c2dab99b57/diff:/var/lib/docker/overlay2/1663605e40f086ff68807b21ca3c45192fe44964eafd9be8c2fb40de4de2a1d3/diff:/var/lib/docker/overlay2/43f09043d1a87c7680911254b5f7f7889406bdcd88968d1be9ec20ece9f542f0/diff:/var/lib/docker/overlay2/f9d5b1fa8f2edef82cd94d8711a1ae150fac129bd16ba9461096dba26fb1a7be/diff:/var/lib/docker/overlay2/6c810ebe915ed5856fb6c65dbb8d5a0af6df7da0ba8ead4b48b38fa6e215aedf/diff:/var/lib/docker/overlay2/ad26d900ae4709d6ebfcc3f74087b8a1d98b7537938a0dea4d48638b03169cf1/diff:/var/lib/docker/overlay2/1e4962b68c4d7c6f03934e3a86d3d185b5b9e8e929aeb9aa2f01385380f55868/diff:/var/lib/docker/overlay2/e633d5f26f70ceeebc081a04a40c99497f33f344f2392a1444fa6a7bcb31df87/diff:/var/lib/docker/overlay2/854538f65447af5d282697db706fc7fbfa73dda95044f9ad0578b8e85e8d9208/diff:/var/lib/docker/overlay2/37b8aea8f49b241a877a111d15d94abdd73bb9a460f886f2cb426a07b5a6cbdd/diff:/var/lib/docker/overlay2/4dab3d5b58f9b3436c8a2867008b3b450c8b3cd763b461ff02c0481bcbc48794/diff",
	                "MergedDir": "/var/lib/docker/overlay2/441fb039840ba5f989e71458c217f43c8e1526d9ba9f353937fbd7c0a591385e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/441fb039840ba5f989e71458c217f43c8e1526d9ba9f353937fbd7c0a591385e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/441fb039840ba5f989e71458c217f43c8e1526d9ba9f353937fbd7c0a591385e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-735126",
	                "Source": "/var/lib/docker/volumes/running-upgrade-735126/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-735126",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-735126",
	                "name.minikube.sigs.k8s.io": "running-upgrade-735126",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b5bc79e795f87f7398fdf1280d7af4e08ea960b8ae478d83a7ed46dd8879852a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32946"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32945"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32944"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/b5bc79e795f8",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "4d94f86a20332464a7e5228555db7ab3feaceab88bf4e947f130f549193a5715",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.4",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:04",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "3a297863f9c4011620511d36718deaa7403f8eae8b2fca87163154cc67121174",
	                    "EndpointID": "4d94f86a20332464a7e5228555db7ab3feaceab88bf4e947f130f549193a5715",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.4",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:04",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
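For reference, the docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calls that recur throughout the log above read the same Ports map shown in this inspect dump (22/tcp mapped to host port 32946). A self-contained Go sketch of the equivalent lookup; the sshHostPort helper is hypothetical, not minikube's code:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// inspectEntry models just the slice of `docker container inspect` output
// needed to recover a published port binding.
type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIP   string `json:"HostIp"`
			HostPort string `json:"HostPort"`
		} `json:"Ports"`
	} `json:"NetworkSettings"`
}

// sshHostPort returns the host port mapped to the container's 22/tcp,
// i.e. what the Go template in the log extracts.
func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", container).Output()
	if err != nil {
		return "", err
	}
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil {
		return "", err
	}
	if len(entries) == 0 || len(entries[0].NetworkSettings.Ports["22/tcp"]) == 0 {
		return "", fmt.Errorf("no 22/tcp binding for %s", container)
	}
	return entries[0].NetworkSettings.Ports["22/tcp"][0].HostPort, nil
}

func main() {
	port, err := sshHostPort("running-upgrade-735126")
	fmt.Println(port, err) // would print 32946 for the inspect dump above
}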
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-735126 -n running-upgrade-735126
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-735126 -n running-upgrade-735126: exit status 4 (297.7756ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0925 11:05:39.571639  182434 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-735126" does not appear in /home/jenkins/minikube-integration/17297-5744/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-735126" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
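The exit-status-4 result reflects the endpoint check reported at status.go:415: the profile name has no cluster entry in the kubeconfig, so no IP can be extracted. A rough Go sketch of such a lookup, assuming the k8s.io/client-go module is available (endpointIP is a hypothetical helper, not the actual status.go code):

package main

import (
	"fmt"
	"net/url"

	"k8s.io/client-go/tools/clientcmd"
)

// endpointIP mirrors the shape of the failing check: find the profile's
// cluster entry in the kubeconfig and return the host of its server URL.
func endpointIP(kubeconfig, profile string) (string, error) {
	cfg, err := clientcmd.LoadFromFile(kubeconfig)
	if err != nil {
		return "", err
	}
	cluster, ok := cfg.Clusters[profile]
	if !ok {
		// The case hit in this run: the profile was never written into the
		// kubeconfig, so status exits with code 4 instead of reporting an IP.
		return "", fmt.Errorf("%q does not appear in %s", profile, kubeconfig)
	}
	u, err := url.Parse(cluster.Server)
	if err != nil {
		return "", err
	}
	return u.Hostname(), nil
}

func main() {
	ip, err := endpointIP("/home/jenkins/minikube-integration/17297-5744/kubeconfig", "running-upgrade-735126")
	fmt.Println(ip, err)
}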
helpers_test.go:175: Cleaning up "running-upgrade-735126" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-735126
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-735126: (2.076738706s)
--- FAIL: TestRunningBinaryUpgrade (74.73s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (98.4s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.9.0.3896662888.exe start -p stopped-upgrade-439109 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.9.0.3896662888.exe start -p stopped-upgrade-439109 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m29.804381811s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.9.0.3896662888.exe -p stopped-upgrade-439109 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.9.0.3896662888.exe -p stopped-upgrade-439109 stop: (1.612760656s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-439109 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-439109 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (6.97124485s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-439109] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17297
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17297-5744/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17297-5744/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-439109 in cluster stopped-upgrade-439109
	* Pulling base image ...
	* Restarting existing docker container for "stopped-upgrade-439109" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0925 11:04:34.838524  167358 out.go:296] Setting OutFile to fd 1 ...
	I0925 11:04:34.838737  167358 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 11:04:34.838746  167358 out.go:309] Setting ErrFile to fd 2...
	I0925 11:04:34.838750  167358 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 11:04:34.838908  167358 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17297-5744/.minikube/bin
	I0925 11:04:34.839540  167358 out.go:303] Setting JSON to false
	I0925 11:04:34.840609  167358 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2827,"bootTime":1695637048,"procs":319,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0925 11:04:34.840689  167358 start.go:138] virtualization: kvm guest
	I0925 11:04:34.843174  167358 out.go:177] * [stopped-upgrade-439109] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0925 11:04:34.844724  167358 notify.go:220] Checking for updates...
	I0925 11:04:34.846158  167358 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 11:04:34.847576  167358 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 11:04:34.848977  167358 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17297-5744/kubeconfig
	I0925 11:04:34.850323  167358 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17297-5744/.minikube
	I0925 11:04:34.851718  167358 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0925 11:04:34.853092  167358 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 11:04:34.854902  167358 config.go:182] Loaded profile config "stopped-upgrade-439109": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0925 11:04:34.854929  167358 start_flags.go:698] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3
	I0925 11:04:34.856967  167358 out.go:177] * Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	I0925 11:04:34.858941  167358 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 11:04:34.884449  167358 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0925 11:04:34.884555  167358 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0925 11:04:34.949473  167358 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:true NGoroutines:67 SystemTime:2023-09-25 11:04:34.94066236 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1042-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0925 11:04:34.949631  167358 docker.go:294] overlay module found
	I0925 11:04:34.951699  167358 out.go:177] * Using the docker driver based on existing profile
	I0925 11:04:34.953171  167358 start.go:298] selected driver: docker
	I0925 11:04:34.953183  167358 start.go:902] validating driver "docker" against &{Name:stopped-upgrade-439109 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-439109 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I0925 11:04:34.953259  167358 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 11:04:34.954015  167358 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0925 11:04:35.019679  167358 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:true NGoroutines:67 SystemTime:2023-09-25 11:04:35.010928306 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1042-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0925 11:04:35.019945  167358 cni.go:84] Creating CNI manager for ""
	I0925 11:04:35.019967  167358 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I0925 11:04:35.019973  167358 start_flags.go:321] config:
	{Name:stopped-upgrade-439109 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-439109 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I0925 11:04:35.022100  167358 out.go:177] * Starting control plane node stopped-upgrade-439109 in cluster stopped-upgrade-439109
	I0925 11:04:35.023543  167358 cache.go:122] Beginning downloading kic base image for docker with crio
	I0925 11:04:35.024992  167358 out.go:177] * Pulling base image ...
	I0925 11:04:35.026246  167358 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I0925 11:04:35.026264  167358 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I0925 11:04:35.042177  167358 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon, skipping pull
	I0925 11:04:35.042201  167358 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 exists in daemon, skipping load
	W0925 11:04:35.069735  167358 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0925 11:04:35.069907  167358 profile.go:148] Saving config to /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/stopped-upgrade-439109/config.json ...
	I0925 11:04:35.069959  167358 cache.go:107] acquiring lock: {Name:mk90562eaed4e709b85074c83642db0602b046c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 11:04:35.069961  167358 cache.go:107] acquiring lock: {Name:mk20c5c3f16f9925c6b32fe6e3873ada9cc8aac8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 11:04:35.070001  167358 cache.go:107] acquiring lock: {Name:mk53145c4a57975dfc4e03bbaca44ebaec45056a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 11:04:35.070001  167358 cache.go:107] acquiring lock: {Name:mkfaffbf63c39a896e67a82c228eab98a3e30851 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 11:04:35.070027  167358 cache.go:107] acquiring lock: {Name:mkc69ea77dcf76b041b10222bc392488e144fb09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 11:04:35.070075  167358 cache.go:107] acquiring lock: {Name:mk90cf264abfe81cc1035d976549c6bcb442bd29 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 11:04:35.070097  167358 cache.go:115] /home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I0925 11:04:35.070108  167358 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 116.695µs
	I0925 11:04:35.070112  167358 cache.go:107] acquiring lock: {Name:mkce7fc029b63ec02d583579fd9316b475911c62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 11:04:35.070127  167358 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I0925 11:04:35.070126  167358 cache.go:115] /home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0925 11:04:35.070155  167358 cache.go:115] /home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I0925 11:04:35.070159  167358 cache.go:115] /home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I0925 11:04:35.070151  167358 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 202.192µs
	I0925 11:04:35.070162  167358 cache.go:115] /home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I0925 11:04:35.070162  167358 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 158.518µs
	I0925 11:04:35.070169  167358 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0925 11:04:35.070173  167358 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I0925 11:04:35.070173  167358 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 65.254µs
	I0925 11:04:35.070183  167358 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I0925 11:04:35.070170  167358 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 101.149µs
	I0925 11:04:35.070190  167358 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I0925 11:04:35.070167  167358 cache.go:107] acquiring lock: {Name:mk07508feecb52ef97612b32f33373956037c251 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 11:04:35.070252  167358 cache.go:115] /home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0925 11:04:35.070265  167358 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 285.898µs
	I0925 11:04:35.070273  167358 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0925 11:04:35.070323  167358 cache.go:115] /home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I0925 11:04:35.070342  167358 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 301.298µs
	I0925 11:04:35.070355  167358 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I0925 11:04:35.070952  167358 cache.go:195] Successfully downloaded all kic artifacts
	I0925 11:04:35.070983  167358 start.go:365] acquiring machines lock for stopped-upgrade-439109: {Name:mk665356eec51f19364b9a6d825a024ffbef1356 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 11:04:35.071037  167358 start.go:369] acquired machines lock for "stopped-upgrade-439109" in 44.712µs
	I0925 11:04:35.071054  167358 start.go:96] Skipping create...Using existing machine configuration
	I0925 11:04:35.071060  167358 fix.go:54] fixHost starting: m01
	I0925 11:04:35.071255  167358 cli_runner.go:164] Run: docker container inspect stopped-upgrade-439109 --format={{.State.Status}}
	I0925 11:04:35.091434  167358 fix.go:102] recreateIfNeeded on stopped-upgrade-439109: state=Stopped err=<nil>
	W0925 11:04:35.091483  167358 fix.go:128] unexpected machine state, will restart: <nil>
	I0925 11:04:35.093808  167358 out.go:177] * Restarting existing docker container for "stopped-upgrade-439109" ...
	I0925 11:04:35.095291  167358 cli_runner.go:164] Run: docker start stopped-upgrade-439109
	I0925 11:04:35.265905  167358 cache.go:115] /home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I0925 11:04:35.265927  167358 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 195.981681ms
	I0925 11:04:35.265946  167358 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/17297-5744/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I0925 11:04:35.265960  167358 cache.go:87] Successfully saved all images to host disk.
	I0925 11:04:35.412764  167358 cli_runner.go:164] Run: docker container inspect stopped-upgrade-439109 --format={{.State.Status}}
	I0925 11:04:35.442842  167358 kic.go:426] container "stopped-upgrade-439109" state is running.
	I0925 11:04:35.443184  167358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-439109
	I0925 11:04:35.464201  167358 profile.go:148] Saving config to /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/stopped-upgrade-439109/config.json ...
	I0925 11:04:35.464485  167358 machine.go:88] provisioning docker machine ...
	I0925 11:04:35.464505  167358 ubuntu.go:169] provisioning hostname "stopped-upgrade-439109"
	I0925 11:04:35.464566  167358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-439109
	I0925 11:04:35.490303  167358 main.go:141] libmachine: Using SSH client type: native
	I0925 11:04:35.490646  167358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 127.0.0.1 32949 <nil> <nil>}
	I0925 11:04:35.490659  167358 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-439109 && echo "stopped-upgrade-439109" | sudo tee /etc/hostname
	I0925 11:04:35.491269  167358 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60892->127.0.0.1:32949: read: connection reset by peer
	I0925 11:04:38.613102  167358 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-439109
	
	I0925 11:04:38.613191  167358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-439109
	I0925 11:04:38.630164  167358 main.go:141] libmachine: Using SSH client type: native
	I0925 11:04:38.630618  167358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 127.0.0.1 32949 <nil> <nil>}
	I0925 11:04:38.630650  167358 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-439109' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-439109/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-439109' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0925 11:04:38.740512  167358 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0925 11:04:38.740547  167358 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17297-5744/.minikube CaCertPath:/home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17297-5744/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17297-5744/.minikube}
	I0925 11:04:38.740581  167358 ubuntu.go:177] setting up certificates
	I0925 11:04:38.740594  167358 provision.go:83] configureAuth start
	I0925 11:04:38.740661  167358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-439109
	I0925 11:04:38.758349  167358 provision.go:138] copyHostCerts
	I0925 11:04:38.758397  167358 exec_runner.go:144] found /home/jenkins/minikube-integration/17297-5744/.minikube/ca.pem, removing ...
	I0925 11:04:38.758405  167358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17297-5744/.minikube/ca.pem
	I0925 11:04:38.758462  167358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17297-5744/.minikube/ca.pem (1078 bytes)
	I0925 11:04:38.758551  167358 exec_runner.go:144] found /home/jenkins/minikube-integration/17297-5744/.minikube/cert.pem, removing ...
	I0925 11:04:38.758556  167358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17297-5744/.minikube/cert.pem
	I0925 11:04:38.758578  167358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17297-5744/.minikube/cert.pem (1123 bytes)
	I0925 11:04:38.758633  167358 exec_runner.go:144] found /home/jenkins/minikube-integration/17297-5744/.minikube/key.pem, removing ...
	I0925 11:04:38.758637  167358 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17297-5744/.minikube/key.pem
	I0925 11:04:38.758657  167358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17297-5744/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17297-5744/.minikube/key.pem (1675 bytes)
	I0925 11:04:38.758749  167358 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17297-5744/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-439109 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-439109]
	I0925 11:04:38.963734  167358 provision.go:172] copyRemoteCerts
	I0925 11:04:38.963804  167358 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0925 11:04:38.963845  167358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-439109
	I0925 11:04:38.981937  167358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32949 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/stopped-upgrade-439109/id_rsa Username:docker}
	I0925 11:04:39.063888  167358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0925 11:04:39.081542  167358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0925 11:04:39.098622  167358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0925 11:04:39.116092  167358 provision.go:86] duration metric: configureAuth took 375.48166ms
	I0925 11:04:39.116120  167358 ubuntu.go:193] setting minikube options for container-runtime
	I0925 11:04:39.116326  167358 config.go:182] Loaded profile config "stopped-upgrade-439109": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0925 11:04:39.116452  167358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-439109
	I0925 11:04:39.135363  167358 main.go:141] libmachine: Using SSH client type: native
	I0925 11:04:39.135842  167358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 127.0.0.1 32949 <nil> <nil>}
	I0925 11:04:39.135865  167358 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0925 11:04:40.938111  167358 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0925 11:04:40.938151  167358 machine.go:91] provisioned docker machine in 5.473653471s
	I0925 11:04:40.938163  167358 start.go:300] post-start starting for "stopped-upgrade-439109" (driver="docker")
	I0925 11:04:40.938178  167358 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0925 11:04:40.938278  167358 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0925 11:04:40.938330  167358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-439109
	I0925 11:04:40.961063  167358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32949 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/stopped-upgrade-439109/id_rsa Username:docker}
	I0925 11:04:41.044953  167358 ssh_runner.go:195] Run: cat /etc/os-release
	I0925 11:04:41.052245  167358 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0925 11:04:41.052278  167358 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0925 11:04:41.052293  167358 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0925 11:04:41.052301  167358 info.go:137] Remote host: Ubuntu 19.10
	I0925 11:04:41.052312  167358 filesync.go:126] Scanning /home/jenkins/minikube-integration/17297-5744/.minikube/addons for local assets ...
	I0925 11:04:41.052376  167358 filesync.go:126] Scanning /home/jenkins/minikube-integration/17297-5744/.minikube/files for local assets ...
	I0925 11:04:41.052476  167358 filesync.go:149] local asset: /home/jenkins/minikube-integration/17297-5744/.minikube/files/etc/ssl/certs/125162.pem -> 125162.pem in /etc/ssl/certs
	I0925 11:04:41.052581  167358 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0925 11:04:41.060850  167358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17297-5744/.minikube/files/etc/ssl/certs/125162.pem --> /etc/ssl/certs/125162.pem (1708 bytes)
	I0925 11:04:41.078050  167358 start.go:303] post-start completed in 139.870688ms
	I0925 11:04:41.078125  167358 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0925 11:04:41.078170  167358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-439109
	I0925 11:04:41.098501  167358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32949 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/stopped-upgrade-439109/id_rsa Username:docker}
	I0925 11:04:41.177752  167358 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0925 11:04:41.181648  167358 fix.go:56] fixHost completed within 6.110580406s
	I0925 11:04:41.181672  167358 start.go:83] releasing machines lock for "stopped-upgrade-439109", held for 6.110623068s
	I0925 11:04:41.181738  167358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-439109
	I0925 11:04:41.201820  167358 ssh_runner.go:195] Run: cat /version.json
	I0925 11:04:41.201869  167358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-439109
	I0925 11:04:41.201886  167358 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0925 11:04:41.201958  167358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-439109
	I0925 11:04:41.219743  167358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32949 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/stopped-upgrade-439109/id_rsa Username:docker}
	I0925 11:04:41.220261  167358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32949 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/stopped-upgrade-439109/id_rsa Username:docker}
	W0925 11:04:41.299943  167358 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0925 11:04:41.300026  167358 ssh_runner.go:195] Run: systemctl --version
	I0925 11:04:41.332206  167358 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0925 11:04:41.394852  167358 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0925 11:04:41.399036  167358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0925 11:04:41.413643  167358 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0925 11:04:41.413712  167358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0925 11:04:41.434886  167358 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0925 11:04:41.434909  167358 start.go:469] detecting cgroup driver to use...
	I0925 11:04:41.434937  167358 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0925 11:04:41.434969  167358 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0925 11:04:41.457870  167358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0925 11:04:41.467325  167358 docker.go:197] disabling cri-docker service (if available) ...
	I0925 11:04:41.467376  167358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0925 11:04:41.478097  167358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0925 11:04:41.487312  167358 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0925 11:04:41.497107  167358 docker.go:207] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0925 11:04:41.497164  167358 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0925 11:04:41.564885  167358 docker.go:213] disabling docker service ...
	I0925 11:04:41.564949  167358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0925 11:04:41.577592  167358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0925 11:04:41.587170  167358 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0925 11:04:41.667408  167358 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0925 11:04:41.733007  167358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0925 11:04:41.742733  167358 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0925 11:04:41.755267  167358 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0925 11:04:41.755324  167358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0925 11:04:41.764330  167358 out.go:177] 
	W0925 11:04:41.765719  167358 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0925 11:04:41.765748  167358 out.go:239] * 
	W0925 11:04:41.766618  167358 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 11:04:41.768094  167358 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-439109 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (98.40s)
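
Root cause, per the stderr above: the v1.9.0-era base image (Ubuntu 19.10) predates CRI-O's drop-in configuration layout, so the step that rewrites pause_image targets /etc/crio/crio.conf.d/02-crio.conf, which does not exist there, and start aborts with RUNTIME_ENABLE. A minimal shell sketch of a guarded rewrite, assuming the legacy image keeps its pause_image setting in the monolithic /etc/crio/crio.conf; this illustrates the failure mode and is not minikube's actual fix:

	# fall back to the single-file layout when the drop-in directory is absent
	conf=/etc/crio/crio.conf.d/02-crio.conf
	[ -f "$conf" ] || conf=/etc/crio/crio.conf   # assumed legacy location
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$conf"
	sudo systemctl restart crio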

                                                
                                    

Test pass (274/304)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 7.36
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.05
10 TestDownloadOnly/v1.28.2/json-events 6.44
11 TestDownloadOnly/v1.28.2/preload-exists 0
15 TestDownloadOnly/v1.28.2/LogsDuration 0.05
16 TestDownloadOnly/DeleteAll 0.18
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.11
18 TestDownloadOnlyKic 1.22
19 TestBinaryMirror 0.69
20 TestOffline 84.16
22 TestAddons/Setup 120.07
24 TestAddons/parallel/Registry 16.47
26 TestAddons/parallel/InspektorGadget 11.93
27 TestAddons/parallel/MetricsServer 5.73
28 TestAddons/parallel/HelmTiller 11.32
30 TestAddons/parallel/CSI 93.16
31 TestAddons/parallel/Headlamp 13.71
32 TestAddons/parallel/CloudSpanner 5.83
35 TestAddons/serial/GCPAuth/Namespaces 0.11
36 TestAddons/StoppedEnableDisable 12.05
37 TestCertOptions 29.32
38 TestCertExpiration 219.3
40 TestForceSystemdFlag 33.47
41 TestForceSystemdEnv 27.6
43 TestKVMDriverInstallOrUpdate 2.87
47 TestErrorSpam/setup 24.38
48 TestErrorSpam/start 0.55
49 TestErrorSpam/status 0.81
50 TestErrorSpam/pause 1.44
51 TestErrorSpam/unpause 1.45
52 TestErrorSpam/stop 1.32
55 TestFunctional/serial/CopySyncFile 0
56 TestFunctional/serial/StartWithProxy 69.55
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 41.4
59 TestFunctional/serial/KubeContext 0.04
60 TestFunctional/serial/KubectlGetPods 0.06
63 TestFunctional/serial/CacheCmd/cache/add_remote 2.73
64 TestFunctional/serial/CacheCmd/cache/add_local 1.14
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
66 TestFunctional/serial/CacheCmd/cache/list 0.04
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.25
68 TestFunctional/serial/CacheCmd/cache/cache_reload 1.59
69 TestFunctional/serial/CacheCmd/cache/delete 0.08
70 TestFunctional/serial/MinikubeKubectlCmd 0.09
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.09
72 TestFunctional/serial/ExtraConfig 31.42
73 TestFunctional/serial/ComponentHealth 0.06
74 TestFunctional/serial/LogsCmd 1.29
75 TestFunctional/serial/LogsFileCmd 1.3
76 TestFunctional/serial/InvalidService 3.9
78 TestFunctional/parallel/ConfigCmd 0.34
79 TestFunctional/parallel/DashboardCmd 10.65
80 TestFunctional/parallel/DryRun 0.64
81 TestFunctional/parallel/InternationalLanguage 0.31
82 TestFunctional/parallel/StatusCmd 1.15
86 TestFunctional/parallel/ServiceCmdConnect 8.66
87 TestFunctional/parallel/AddonsCmd 0.14
88 TestFunctional/parallel/PersistentVolumeClaim 29.45
90 TestFunctional/parallel/SSHCmd 0.56
91 TestFunctional/parallel/CpCmd 1.31
92 TestFunctional/parallel/MySQL 21.91
93 TestFunctional/parallel/FileSync 0.25
94 TestFunctional/parallel/CertSync 1.71
98 TestFunctional/parallel/NodeLabels 0.08
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.62
102 TestFunctional/parallel/License 0.14
103 TestFunctional/parallel/ServiceCmd/DeployApp 10.24
105 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.65
106 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
108 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.37
109 TestFunctional/parallel/ServiceCmd/List 0.52
110 TestFunctional/parallel/Version/short 0.11
111 TestFunctional/parallel/Version/components 1.4
112 TestFunctional/parallel/ServiceCmd/JSONOutput 0.49
113 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
114 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
115 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
116 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
117 TestFunctional/parallel/ImageCommands/ImageBuild 1.79
118 TestFunctional/parallel/ImageCommands/Setup 0.88
119 TestFunctional/parallel/ServiceCmd/HTTPS 0.41
120 TestFunctional/parallel/ServiceCmd/Format 0.39
121 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.3
122 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
123 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
127 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
128 TestFunctional/parallel/ServiceCmd/URL 0.36
129 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
130 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
131 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
132 TestFunctional/parallel/ProfileCmd/profile_not_create 0.32
133 TestFunctional/parallel/ProfileCmd/profile_list 0.31
134 TestFunctional/parallel/ProfileCmd/profile_json_output 0.31
135 TestFunctional/parallel/MountCmd/any-port 11.35
136 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.08
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.18
138 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2
139 TestFunctional/parallel/MountCmd/specific-port 2.69
140 TestFunctional/parallel/ImageCommands/ImageRemove 0.92
142 TestFunctional/parallel/MountCmd/VerifyCleanup 2.33
143 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.29
144 TestFunctional/delete_addon-resizer_images 0.07
145 TestFunctional/delete_my-image_image 0.01
146 TestFunctional/delete_minikube_cached_images 0.01
150 TestIngressAddonLegacy/StartLegacyK8sCluster 62.79
152 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 10.3
153 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.51
157 TestJSONOutput/start/Command 66.79
158 TestJSONOutput/start/Audit 0
160 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
161 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
163 TestJSONOutput/pause/Command 0.63
164 TestJSONOutput/pause/Audit 0
166 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
169 TestJSONOutput/unpause/Command 0.57
170 TestJSONOutput/unpause/Audit 0
172 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
175 TestJSONOutput/stop/Command 5.72
176 TestJSONOutput/stop/Audit 0
178 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
180 TestErrorJSONOutput 0.18
182 TestKicCustomNetwork/create_custom_network 31.37
183 TestKicCustomNetwork/use_default_bridge_network 26.53
184 TestKicExistingNetwork 26.44
185 TestKicCustomSubnet 26.49
186 TestKicStaticIP 26.68
187 TestMainNoArgs 0.04
188 TestMinikubeProfile 49.97
191 TestMountStart/serial/StartWithMountFirst 7.83
192 TestMountStart/serial/VerifyMountFirst 0.23
193 TestMountStart/serial/StartWithMountSecond 5.04
194 TestMountStart/serial/VerifyMountSecond 0.23
195 TestMountStart/serial/DeleteFirst 1.59
196 TestMountStart/serial/VerifyMountPostDelete 0.23
197 TestMountStart/serial/Stop 1.18
198 TestMountStart/serial/RestartStopped 6.89
199 TestMountStart/serial/VerifyMountPostStop 0.23
202 TestMultiNode/serial/FreshStart2Nodes 83.75
203 TestMultiNode/serial/DeployApp2Nodes 3.3
205 TestMultiNode/serial/AddNode 50.65
206 TestMultiNode/serial/ProfileList 0.25
207 TestMultiNode/serial/CopyFile 8.44
208 TestMultiNode/serial/StopNode 2.05
209 TestMultiNode/serial/StartAfterStop 10.76
210 TestMultiNode/serial/RestartKeepsNodes 116.47
211 TestMultiNode/serial/DeleteNode 4.6
212 TestMultiNode/serial/StopMultiNode 23.77
213 TestMultiNode/serial/RestartMultiNode 73.81
214 TestMultiNode/serial/ValidateNameConflict 23.19
219 TestPreload 131.72
221 TestScheduledStopUnix 97.76
224 TestInsufficientStorage 10.19
227 TestKubernetesUpgrade 347.38
228 TestMissingContainerUpgrade 175.34
230 TestStoppedBinaryUpgrade/Setup 0.59
231 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
232 TestNoKubernetes/serial/StartWithK8s 35.83
234 TestNoKubernetes/serial/StartWithStopK8s 9.07
235 TestNoKubernetes/serial/Start 10.88
236 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
237 TestNoKubernetes/serial/ProfileList 1.47
238 TestNoKubernetes/serial/Stop 1.21
239 TestNoKubernetes/serial/StartNoArgs 9.06
240 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.39
248 TestStoppedBinaryUpgrade/MinikubeLogs 0.53
250 TestPause/serial/Start 54.84
251 TestPause/serial/SecondStartNoReconfiguration 38.39
259 TestNetworkPlugins/group/false 3.14
263 TestPause/serial/Pause 1.06
264 TestPause/serial/VerifyStatus 0.31
265 TestPause/serial/Unpause 0.72
266 TestPause/serial/PauseAgain 0.75
267 TestPause/serial/DeletePaused 2.61
268 TestPause/serial/VerifyDeletedResources 15.46
270 TestStartStop/group/old-k8s-version/serial/FirstStart 114.06
272 TestStartStop/group/no-preload/serial/FirstStart 56.61
273 TestStartStop/group/no-preload/serial/DeployApp 7.42
274 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.99
275 TestStartStop/group/no-preload/serial/Stop 11.92
276 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.15
277 TestStartStop/group/no-preload/serial/SecondStart 333.27
278 TestStartStop/group/old-k8s-version/serial/DeployApp 8.43
279 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.87
280 TestStartStop/group/old-k8s-version/serial/Stop 12.04
281 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.16
282 TestStartStop/group/old-k8s-version/serial/SecondStart 432.94
284 TestStartStop/group/embed-certs/serial/FirstStart 71.06
286 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 71.29
287 TestStartStop/group/embed-certs/serial/DeployApp 7.34
288 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.88
289 TestStartStop/group/embed-certs/serial/Stop 11.91
290 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.35
291 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.15
292 TestStartStop/group/embed-certs/serial/SecondStart 333.41
293 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.99
294 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.89
295 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.15
296 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 339.92
297 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 13.08
298 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
299 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.27
300 TestStartStop/group/no-preload/serial/Pause 2.53
302 TestStartStop/group/newest-cni/serial/FirstStart 34.34
303 TestStartStop/group/newest-cni/serial/DeployApp 0
304 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.98
305 TestStartStop/group/newest-cni/serial/Stop 1.2
306 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
307 TestStartStop/group/newest-cni/serial/SecondStart 26.22
308 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
309 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
310 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
311 TestStartStop/group/newest-cni/serial/Pause 2.36
312 TestNetworkPlugins/group/auto/Start 39.36
313 TestNetworkPlugins/group/auto/KubeletFlags 0.25
314 TestNetworkPlugins/group/auto/NetCatPod 10.24
315 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
316 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
317 TestNetworkPlugins/group/auto/DNS 0.15
318 TestNetworkPlugins/group/auto/Localhost 0.13
319 TestNetworkPlugins/group/auto/HairPin 0.13
320 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.28
321 TestStartStop/group/old-k8s-version/serial/Pause 2.64
322 TestNetworkPlugins/group/kindnet/Start 75.17
323 TestNetworkPlugins/group/flannel/Start 48.95
324 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 14.02
325 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
326 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.28
327 TestStartStop/group/embed-certs/serial/Pause 2.82
328 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 12.02
329 TestNetworkPlugins/group/enable-default-cni/Start 78.1
330 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
331 TestNetworkPlugins/group/flannel/ControllerPod 5.02
332 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.28
333 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.62
334 TestNetworkPlugins/group/flannel/KubeletFlags 0.25
335 TestNetworkPlugins/group/flannel/NetCatPod 10.24
336 TestNetworkPlugins/group/bridge/Start 76.9
337 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
338 TestNetworkPlugins/group/flannel/DNS 0.2
339 TestNetworkPlugins/group/flannel/Localhost 0.17
340 TestNetworkPlugins/group/flannel/HairPin 0.18
341 TestNetworkPlugins/group/kindnet/KubeletFlags 0.25
342 TestNetworkPlugins/group/kindnet/NetCatPod 9.29
343 TestNetworkPlugins/group/kindnet/DNS 0.19
344 TestNetworkPlugins/group/kindnet/Localhost 0.17
345 TestNetworkPlugins/group/kindnet/HairPin 0.18
346 TestNetworkPlugins/group/custom-flannel/Start 62.21
347 TestNetworkPlugins/group/calico/Start 62.62
348 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
349 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.26
350 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
351 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
352 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
353 TestNetworkPlugins/group/bridge/KubeletFlags 0.26
354 TestNetworkPlugins/group/bridge/NetCatPod 10.39
355 TestNetworkPlugins/group/bridge/DNS 0.18
356 TestNetworkPlugins/group/bridge/Localhost 0.18
357 TestNetworkPlugins/group/bridge/HairPin 0.18
358 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.33
359 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.25
360 TestNetworkPlugins/group/custom-flannel/DNS 0.18
361 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
362 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
363 TestNetworkPlugins/group/calico/ControllerPod 5.02
364 TestNetworkPlugins/group/calico/KubeletFlags 0.27
365 TestNetworkPlugins/group/calico/NetCatPod 10.3
366 TestNetworkPlugins/group/calico/DNS 0.15
367 TestNetworkPlugins/group/calico/Localhost 0.26
368 TestNetworkPlugins/group/calico/HairPin 0.14
TestDownloadOnly/v1.16.0/json-events (7.36s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-713911 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-713911 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.36322319s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (7.36s)
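
The -o=json flag makes minikube emit start progress as a stream of CloudEvents-style JSON objects, one per line, which is what the json-events tests consume. A sketch of inspecting that stream by hand; the event type string "io.k8s.sigs.minikube.step" and the .data.name field follow minikube's JSON output convention of this era but should be treated as assumptions to verify, and jq is not part of the harness:

	# print the name of each start step from the JSON event stream (illustrative)
	out/minikube-linux-amd64 start -o=json --download-only -p download-only-713911 \
	  --force --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.name'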

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-713911
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-713911: exit status 85 (53.848081ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-713911 | jenkins | v1.31.2 | 25 Sep 23 10:33 UTC |          |
	|         | -p download-only-713911        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/25 10:33:28
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0925 10:33:28.467018   12527 out.go:296] Setting OutFile to fd 1 ...
	I0925 10:33:28.467132   12527 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 10:33:28.467140   12527 out.go:309] Setting ErrFile to fd 2...
	I0925 10:33:28.467145   12527 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 10:33:28.467338   12527 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17297-5744/.minikube/bin
	W0925 10:33:28.467456   12527 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17297-5744/.minikube/config/config.json: open /home/jenkins/minikube-integration/17297-5744/.minikube/config/config.json: no such file or directory
	I0925 10:33:28.468017   12527 out.go:303] Setting JSON to true
	I0925 10:33:28.468903   12527 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":961,"bootTime":1695637048,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0925 10:33:28.468966   12527 start.go:138] virtualization: kvm guest
	I0925 10:33:28.471247   12527 out.go:97] [download-only-713911] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0925 10:33:28.472582   12527 out.go:169] MINIKUBE_LOCATION=17297
	W0925 10:33:28.471351   12527 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17297-5744/.minikube/cache/preloaded-tarball: no such file or directory
	I0925 10:33:28.471397   12527 notify.go:220] Checking for updates...
	I0925 10:33:28.475244   12527 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 10:33:28.476484   12527 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17297-5744/kubeconfig
	I0925 10:33:28.477762   12527 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17297-5744/.minikube
	I0925 10:33:28.479029   12527 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0925 10:33:28.481100   12527 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0925 10:33:28.481304   12527 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 10:33:28.505416   12527 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0925 10:33:28.505504   12527 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0925 10:33:28.848812   12527 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:43 SystemTime:2023-09-25 10:33:28.840661678 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1042-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0925 10:33:28.848939   12527 docker.go:294] overlay module found
	I0925 10:33:28.850929   12527 out.go:97] Using the docker driver based on user configuration
	I0925 10:33:28.850959   12527 start.go:298] selected driver: docker
	I0925 10:33:28.850969   12527 start.go:902] validating driver "docker" against <nil>
	I0925 10:33:28.851055   12527 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0925 10:33:28.909277   12527 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:43 SystemTime:2023-09-25 10:33:28.901645929 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1042-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0925 10:33:28.909466   12527 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0925 10:33:28.910116   12527 start_flags.go:384] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0925 10:33:28.910321   12527 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0925 10:33:28.912270   12527 out.go:169] Using Docker driver with root privileges
	I0925 10:33:28.913797   12527 cni.go:84] Creating CNI manager for ""
	I0925 10:33:28.913810   12527 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0925 10:33:28.913819   12527 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0925 10:33:28.913831   12527 start_flags.go:321] config:
	{Name:download-only-713911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-713911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 10:33:28.915361   12527 out.go:97] Starting control plane node download-only-713911 in cluster download-only-713911
	I0925 10:33:28.915372   12527 cache.go:122] Beginning downloading kic base image for docker with crio
	I0925 10:33:28.916667   12527 out.go:97] Pulling base image ...
	I0925 10:33:28.916691   12527 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0925 10:33:28.916798   12527 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I0925 10:33:28.931280   12527 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 to local cache
	I0925 10:33:28.931446   12527 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local cache directory
	I0925 10:33:28.931529   12527 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 to local cache
	I0925 10:33:28.944323   12527 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0925 10:33:28.944341   12527 cache.go:57] Caching tarball of preloaded images
	I0925 10:33:28.944454   12527 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0925 10:33:28.947439   12527 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0925 10:33:28.947460   12527 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0925 10:33:28.979703   12527 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/17297-5744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0925 10:33:31.832787   12527 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 as a tarball
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-713911"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.05s)
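
The exit status 85 above is the expected outcome rather than a regression: a --download-only run never creates a control-plane node, so minikube logs has nothing to read (hence the "control plane node does not exist" hint in the output) and LogsDuration passes anyway, only timing the command. Reproducing the check by hand, with the profile name from the run above:

	# expect a non-zero exit (85 in this run) since the profile has no node
	out/minikube-linux-amd64 logs -p download-only-713911
	echo "exit status: $?"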

                                                
                                    
TestDownloadOnly/v1.28.2/json-events (6.44s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-713911 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-713911 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.444151527s)
--- PASS: TestDownloadOnly/v1.28.2/json-events (6.44s)

                                                
                                    
TestDownloadOnly/v1.28.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/preload-exists
--- PASS: TestDownloadOnly/v1.28.2/preload-exists (0.00s)
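
preload-exists passes for both Kubernetes versions because each download-only run leaves its tarball in the shared cache. A hand check of that cache; the v1.16.0 filename is taken from the download URL logged earlier, while the v1.28.2 name is only inferred from the same naming pattern and should be verified:

	ls /home/jenkins/minikube-integration/17297-5744/.minikube/cache/preloaded-tarball/
	# expected, per the naming pattern above (the v1.28.2 name is an assumption):
	#   preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	#   preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4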

                                                
                                    
TestDownloadOnly/v1.28.2/LogsDuration (0.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-713911
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-713911: exit status 85 (51.564959ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-713911 | jenkins | v1.31.2 | 25 Sep 23 10:33 UTC |          |
	|         | -p download-only-713911        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-713911 | jenkins | v1.31.2 | 25 Sep 23 10:33 UTC |          |
	|         | -p download-only-713911        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.2   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/25 10:33:35
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0925 10:33:35.887261   12673 out.go:296] Setting OutFile to fd 1 ...
	I0925 10:33:35.887389   12673 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 10:33:35.887401   12673 out.go:309] Setting ErrFile to fd 2...
	I0925 10:33:35.887409   12673 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 10:33:35.887597   12673 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17297-5744/.minikube/bin
	W0925 10:33:35.887717   12673 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17297-5744/.minikube/config/config.json: open /home/jenkins/minikube-integration/17297-5744/.minikube/config/config.json: no such file or directory
	I0925 10:33:35.888140   12673 out.go:303] Setting JSON to true
	I0925 10:33:35.888998   12673 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":968,"bootTime":1695637048,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0925 10:33:35.889050   12673 start.go:138] virtualization: kvm guest
	I0925 10:33:35.891966   12673 out.go:97] [download-only-713911] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0925 10:33:35.893502   12673 out.go:169] MINIKUBE_LOCATION=17297
	I0925 10:33:35.892101   12673 notify.go:220] Checking for updates...
	I0925 10:33:35.896262   12673 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 10:33:35.897707   12673 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17297-5744/kubeconfig
	I0925 10:33:35.899078   12673 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17297-5744/.minikube
	I0925 10:33:35.900473   12673 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0925 10:33:35.903491   12673 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0925 10:33:35.904116   12673 config.go:182] Loaded profile config "download-only-713911": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W0925 10:33:35.904167   12673 start.go:810] api.Load failed for download-only-713911: filestore "download-only-713911": Docker machine "download-only-713911" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0925 10:33:35.904281   12673 driver.go:373] Setting default libvirt URI to qemu:///system
	W0925 10:33:35.904325   12673 start.go:810] api.Load failed for download-only-713911: filestore "download-only-713911": Docker machine "download-only-713911" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0925 10:33:35.924465   12673 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0925 10:33:35.924535   12673 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0925 10:33:35.974851   12673 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-09-25 10:33:35.966398593 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1042-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0925 10:33:35.974949   12673 docker.go:294] overlay module found
	I0925 10:33:35.978181   12673 out.go:97] Using the docker driver based on existing profile
	I0925 10:33:35.978207   12673 start.go:298] selected driver: docker
	I0925 10:33:35.978213   12673 start.go:902] validating driver "docker" against &{Name:download-only-713911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-713911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 10:33:35.978387   12673 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0925 10:33:36.030403   12673 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-09-25 10:33:36.022952636 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1042-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0925 10:33:36.030994   12673 cni.go:84] Creating CNI manager for ""
	I0925 10:33:36.031010   12673 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0925 10:33:36.031021   12673 start_flags.go:321] config:
	{Name:download-only-713911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:download-only-713911 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 10:33:36.032963   12673 out.go:97] Starting control plane node download-only-713911 in cluster download-only-713911
	I0925 10:33:36.032977   12673 cache.go:122] Beginning downloading kic base image for docker with crio
	I0925 10:33:36.034315   12673 out.go:97] Pulling base image ...
	I0925 10:33:36.034334   12673 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I0925 10:33:36.034435   12673 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I0925 10:33:36.048839   12673 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 to local cache
	I0925 10:33:36.048953   12673 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local cache directory
	I0925 10:33:36.048968   12673 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local cache directory, skipping pull
	I0925 10:33:36.048972   12673 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 exists in cache, skipping pull
	I0925 10:33:36.048984   12673 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 as a tarball
	I0925 10:33:36.065615   12673 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4
	I0925 10:33:36.065638   12673 cache.go:57] Caching tarball of preloaded images
	I0925 10:33:36.065753   12673 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I0925 10:33:36.067622   12673 out.go:97] Downloading Kubernetes v1.28.2 preload ...
	I0925 10:33:36.067634   12673 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 ...
	I0925 10:33:36.103588   12673 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:63ef340a9dae90462e676325aa502af3 -> /home/jenkins/minikube-integration/17297-5744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4
	I0925 10:33:40.743421   12673 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 ...
	I0925 10:33:40.743529   12673 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17297-5744/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-713911"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.2/LogsDuration (0.05s)
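
Note: the "exit status 85" above is expected for a --download-only profile: nothing is ever started, so "minikube logs" has no control plane to read. A minimal sketch of the same flow by hand, reusing the Args recorded in the audit table above (the duplicated --container-runtime flag in the recorded Args is dropped here):

    # Download-only run: fetches the preload and binaries, creates no node.
    out/minikube-linux-amd64 start -o=json --download-only -p download-only-713911 \
      --force --alsologtostderr --kubernetes-version=v1.28.2 \
      --container-runtime=crio --driver=docker
    # "logs" is then expected to fail, exactly as logged above.
    out/minikube-linux-amd64 logs -p download-only-713911 || echo "logs exited $? (expected)"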

                                                
                                    
TestDownloadOnly/DeleteAll (0.18s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.18s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-713911
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.11s)

                                                
                                    
TestDownloadOnlyKic (1.22s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-747732 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-747732" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-747732
--- PASS: TestDownloadOnlyKic (1.22s)

                                                
                                    
TestBinaryMirror (0.69s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-732200 --alsologtostderr --binary-mirror http://127.0.0.1:43745 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-732200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-732200
--- PASS: TestBinaryMirror (0.69s)
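
Note: TestBinaryMirror verifies that Kubernetes binary downloads can be redirected to a user-supplied mirror. A sketch of the equivalent manual run; the test serves its mirror on an ephemeral localhost port, and 43745 is simply what this run happened to bind:

    out/minikube-linux-amd64 start --download-only -p binary-mirror-732200 \
      --alsologtostderr --binary-mirror http://127.0.0.1:43745 \
      --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 delete -p binary-mirror-732200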

                                                
                                    
TestOffline (84.16s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-313599 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-313599 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (1m20.531981352s)
helpers_test.go:175: Cleaning up "offline-crio-313599" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-313599
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-313599: (3.628884755s)
--- PASS: TestOffline (84.16s)

                                                
                                    
TestAddons/Setup (120.07s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p addons-440446 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p addons-440446 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m0.069425493s)
--- PASS: TestAddons/Setup (120.07s)
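
Note: the single-line setup command above is hard to scan; this is the same invocation wrapped, with the flags copied verbatim from this run:

    out/minikube-linux-amd64 start -p addons-440446 --wait=true --memory=4000 \
      --alsologtostderr --driver=docker --container-runtime=crio \
      --addons=registry --addons=metrics-server --addons=volumesnapshots \
      --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner \
      --addons=inspektor-gadget --addons=ingress --addons=ingress-dns \
      --addons=helm-tiller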

                                                
                                    
TestAddons/parallel/Registry (16.47s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 10.25402ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-4x2ds" [16f0fa7e-a090-4949-8aaa-1a67f930d55d] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.01054509s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-44tvq" [5b8babe6-b83c-4179-92ea-4500aa2dddfb] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.087687991s
addons_test.go:316: (dbg) Run:  kubectl --context addons-440446 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-440446 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-440446 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.603832098s)
addons_test.go:335: (dbg) Run:  out/minikube-linux-amd64 -p addons-440446 ip
addons_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p addons-440446 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.47s)
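
Note: the core registry check above is an in-cluster HTTP probe; the same logged command, wrapped for readability:

    # Throwaway busybox pod that HEAD-requests the registry's Service DNS name.
    kubectl --context addons-440446 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"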

                                                
                                    
TestAddons/parallel/InspektorGadget (11.93s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-k898v" [adf47bf8-bcc9-4ad0-827e-2759556b3988] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.016796181s
addons_test.go:817: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-440446
addons_test.go:817: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-440446: (6.910499025s)
--- PASS: TestAddons/parallel/InspektorGadget (11.93s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.73s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 4.089899ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-gf64x" [34885621-909d-433c-8a32-f7e24616c562] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.011463574s
addons_test.go:391: (dbg) Run:  kubectl --context addons-440446 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p addons-440446 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.73s)

                                                
                                    
TestAddons/parallel/HelmTiller (11.32s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:432: tiller-deploy stabilized in 10.188644ms
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-fp8ss" [8008ff6a-2c21-487b-927e-dcbe79881038] Running
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.010648627s
addons_test.go:449: (dbg) Run:  kubectl --context addons-440446 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:449: (dbg) Done: kubectl --context addons-440446 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.777998101s)
addons_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p addons-440446 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:466: (dbg) Done: out/minikube-linux-amd64 -p addons-440446 addons disable helm-tiller --alsologtostderr -v=1: (1.513410718s)
--- PASS: TestAddons/parallel/HelmTiller (11.32s)

                                                
                                    
TestAddons/parallel/CSI (93.16s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 32.438123ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-440446 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc -o jsonpath={.status.phase} -n default
2023/09/25 10:36:00 [DEBUG] GET http://192.168.49.2:5000
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-440446 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [4055fb80-65de-42f6-a732-89f96ccab45a] Pending
helpers_test.go:344: "task-pv-pod" [4055fb80-65de-42f6-a732-89f96ccab45a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [4055fb80-65de-42f6-a732-89f96ccab45a] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.009066132s
addons_test.go:560: (dbg) Run:  kubectl --context addons-440446 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-440446 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-440446 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-440446 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-440446 delete pod task-pv-pod
addons_test.go:570: (dbg) Done: kubectl --context addons-440446 delete pod task-pv-pod: (1.171874843s)
addons_test.go:576: (dbg) Run:  kubectl --context addons-440446 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-440446 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-440446 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-440446 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [5774062d-85fd-4ed9-9434-923e054d79b3] Pending
helpers_test.go:344: "task-pv-pod-restore" [5774062d-85fd-4ed9-9434-923e054d79b3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [5774062d-85fd-4ed9-9434-923e054d79b3] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.009254589s
addons_test.go:602: (dbg) Run:  kubectl --context addons-440446 delete pod task-pv-pod-restore
addons_test.go:606: (dbg) Run:  kubectl --context addons-440446 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-440446 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-amd64 -p addons-440446 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-linux-amd64 -p addons-440446 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.477916582s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-amd64 -p addons-440446 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (93.16s)
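
Note: the long runs of identical helpers_test.go:394 lines above are the harness polling the PVC phase until it binds. An equivalent standalone wait, as a sketch (the 2s interval is an assumption; the harness uses its own retry cadence):

    # Block until the csi-hostpath PVC reports Bound, as the repeated
    # jsonpath gets above are doing.
    until [ "$(kubectl --context addons-440446 -n default \
          get pvc hpvc -o jsonpath='{.status.phase}')" = "Bound" ]; do
      sleep 2
    done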

                                                
                                    
TestAddons/parallel/Headlamp (13.71s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-440446 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-440446 --alsologtostderr -v=1: (1.689483007s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-58b88cff49-stwnz" [dd9519bb-54c3-42ce-9161-6545a3ac3229] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-58b88cff49-stwnz" [dd9519bb-54c3-42ce-9161-6545a3ac3229] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.014340246s
--- PASS: TestAddons/parallel/Headlamp (13.71s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.83s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7d49f968d9-tt28p" [d2e8b2b7-142d-4d96-ba4d-74be402045b8] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.007402147s
addons_test.go:836: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-440446
--- PASS: TestAddons/parallel/CloudSpanner (5.83s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-440446 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-440446 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.05s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-440446
addons_test.go:148: (dbg) Done: out/minikube-linux-amd64 stop -p addons-440446: (11.844969648s)
addons_test.go:152: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-440446
addons_test.go:156: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-440446
addons_test.go:161: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-440446
--- PASS: TestAddons/StoppedEnableDisable (12.05s)

                                                
                                    
TestCertOptions (29.32s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-700253 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-700253 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (24.939522124s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-700253 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-700253 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-700253 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-700253" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-700253
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-700253: (3.824521263s)
--- PASS: TestCertOptions (29.32s)
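
Note: the openssl step above is where the custom --apiserver-ips/--apiserver-names/--apiserver-port values are asserted. A sketch that narrows the output to the relevant section (the grep filter is an addition, not part of the test):

    # The extra IPs and names should appear under X509v3 Subject Alternative Name.
    out/minikube-linux-amd64 -p cert-options-700253 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"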

                                                
                                    
TestCertExpiration (219.3s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-422595 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-422595 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (22.599811091s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-422595 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-422595 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (14.469357988s)
helpers_test.go:175: Cleaning up "cert-expiration-422595" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-422595
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-422595: (2.225710346s)
--- PASS: TestCertExpiration (219.30s)

                                                
                                    
TestForceSystemdFlag (33.47s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-260053 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-260053 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (28.153856944s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-260053 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-260053" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-260053
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-260053: (5.04965407s)
--- PASS: TestForceSystemdFlag (33.47s)
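
Note: the test asserts that the generated CRI-O drop-in reflects --force-systemd. A sketch for inspecting it by hand; grepping for cgroup_manager assumes that is the key the drop-in sets, which this log does not show:

    # With --force-systemd, the drop-in should select the systemd cgroup manager.
    out/minikube-linux-amd64 -p force-systemd-flag-260053 ssh \
      "cat /etc/crio/crio.conf.d/02-crio.conf" | grep -i cgroup_manager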

                                                
                                    
TestForceSystemdEnv (27.6s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-322247 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0925 11:05:42.636701   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/client.crt: no such file or directory
E0925 11:05:44.738916   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/client.crt: no such file or directory
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-322247 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (25.307802789s)
helpers_test.go:175: Cleaning up "force-systemd-env-322247" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-322247
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-322247: (2.288043967s)
--- PASS: TestForceSystemdEnv (27.60s)

                                                
                                    
TestKVMDriverInstallOrUpdate (2.87s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (2.87s)

                                                
                                    
TestErrorSpam/setup (24.38s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-897241 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-897241 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-897241 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-897241 --driver=docker  --container-runtime=crio: (24.382950759s)
--- PASS: TestErrorSpam/setup (24.38s)

                                                
                                    
TestErrorSpam/start (0.55s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-897241 --log_dir /tmp/nospam-897241 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-897241 --log_dir /tmp/nospam-897241 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-897241 --log_dir /tmp/nospam-897241 start --dry-run
--- PASS: TestErrorSpam/start (0.55s)

                                                
                                    
TestErrorSpam/status (0.81s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-897241 --log_dir /tmp/nospam-897241 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-897241 --log_dir /tmp/nospam-897241 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-897241 --log_dir /tmp/nospam-897241 status
--- PASS: TestErrorSpam/status (0.81s)

                                                
                                    
TestErrorSpam/pause (1.44s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-897241 --log_dir /tmp/nospam-897241 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-897241 --log_dir /tmp/nospam-897241 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-897241 --log_dir /tmp/nospam-897241 pause
--- PASS: TestErrorSpam/pause (1.44s)

                                                
                                    
TestErrorSpam/unpause (1.45s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-897241 --log_dir /tmp/nospam-897241 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-897241 --log_dir /tmp/nospam-897241 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-897241 --log_dir /tmp/nospam-897241 unpause
--- PASS: TestErrorSpam/unpause (1.45s)

                                                
                                    
TestErrorSpam/stop (1.32s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-897241 --log_dir /tmp/nospam-897241 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-897241 --log_dir /tmp/nospam-897241 stop: (1.170104923s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-897241 --log_dir /tmp/nospam-897241 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-897241 --log_dir /tmp/nospam-897241 stop
--- PASS: TestErrorSpam/stop (1.32s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17297-5744/.minikube/files/etc/test/nested/copy/12516/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (69.55s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-104204 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0925 10:40:44.741883   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/client.crt: no such file or directory
E0925 10:40:44.747697   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/client.crt: no such file or directory
E0925 10:40:44.758003   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/client.crt: no such file or directory
E0925 10:40:44.778271   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/client.crt: no such file or directory
E0925 10:40:44.818513   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/client.crt: no such file or directory
E0925 10:40:44.898811   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/client.crt: no such file or directory
E0925 10:40:45.059237   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/client.crt: no such file or directory
E0925 10:40:45.379783   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/client.crt: no such file or directory
E0925 10:40:46.020741   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/client.crt: no such file or directory
E0925 10:40:47.301831   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/client.crt: no such file or directory
E0925 10:40:49.862569   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-104204 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m9.550256433s)
--- PASS: TestFunctional/serial/StartWithProxy (69.55s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (41.4s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-104204 --alsologtostderr -v=8
E0925 10:40:54.983423   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/client.crt: no such file or directory
E0925 10:41:05.224026   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/client.crt: no such file or directory
E0925 10:41:25.704231   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-104204 --alsologtostderr -v=8: (41.403816729s)
functional_test.go:659: soft start took 41.404485461s for "functional-104204" cluster.
--- PASS: TestFunctional/serial/SoftStart (41.40s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-104204 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.73s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.73s)
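
A minimal shell sketch of the remote-cache flow exercised above, assuming a running profile named functional-104204 and a minikube binary on PATH (the CI run invokes it as out/minikube-linux-amd64):

    # Pull images from a remote registry into minikube's local cache and
    # load them into the cluster node
    minikube -p functional-104204 cache add registry.k8s.io/pause:3.1
    minikube -p functional-104204 cache add registry.k8s.io/pause:3.3
    minikube -p functional-104204 cache add registry.k8s.io/pause:latest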

TestFunctional/serial/CacheCmd/cache/add_local (1.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-104204 /tmp/TestFunctionalserialCacheCmdcacheadd_local3951272760/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 cache add minikube-local-cache-test:functional-104204
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 cache delete minikube-local-cache-test:functional-104204
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-104204
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.14s)
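
The local variant, sketched under the same assumptions (the image tag mirrors the one in the log; the build context "." is illustrative):

    # Build an image on the host, cache it into the cluster, then clean up
    docker build -t minikube-local-cache-test:functional-104204 .
    minikube -p functional-104204 cache add minikube-local-cache-test:functional-104204
    minikube -p functional-104204 cache delete minikube-local-cache-test:functional-104204
    docker rmi minikube-local-cache-test:functional-104204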

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.59s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-104204 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (248.733658ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.59s)
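
The reload sequence above, as a sketch: remove a cached image inside the node, confirm it is gone, then have minikube push everything still in its cache back into the node.

    minikube -p functional-104204 ssh sudo crictl rmi registry.k8s.io/pause:latest
    # Now fails: the image is no longer present in the node
    minikube -p functional-104204 ssh sudo crictl inspecti registry.k8s.io/pause:latest
    # Re-push cached images; the same inspect then succeeds
    minikube -p functional-104204 cache reload
    minikube -p functional-104204 ssh sudo crictl inspecti registry.k8s.io/pause:latest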

TestFunctional/serial/CacheCmd/cache/delete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

TestFunctional/serial/MinikubeKubectlCmd (0.09s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 kubectl -- --context functional-104204 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.09s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-104204 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

TestFunctional/serial/ExtraConfig (31.42s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-104204 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0925 10:42:06.664409   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-104204 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (31.424030948s)
functional_test.go:757: restart took 31.424155425s for "functional-104204" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (31.42s)
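
The restart being timed here, sketched: --extra-config passes a component flag through to the deployed control plane, in this case enabling an extra apiserver admission plugin, and --wait=all blocks until all components report healthy again.

    minikube start -p functional-104204 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
      --wait=all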

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-104204 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.29s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-104204 logs: (1.286456435s)
--- PASS: TestFunctional/serial/LogsCmd (1.29s)

TestFunctional/serial/LogsFileCmd (1.3s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 logs --file /tmp/TestFunctionalserialLogsFileCmd3503941276/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-104204 logs --file /tmp/TestFunctionalserialLogsFileCmd3503941276/001/logs.txt: (1.296242593s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.30s)

TestFunctional/serial/InvalidService (3.9s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-104204 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-104204
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-104204: exit status 115 (305.465536ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30350 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-104204 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.90s)
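
What this test asserts, as a sketch: a Service whose selector matches no running pod causes `minikube service` to exit with status 115 (SVC_UNREACHABLE) instead of printing a usable URL.

    kubectl --context functional-104204 apply -f testdata/invalidsvc.yaml
    minikube -p functional-104204 service invalid-svc    # expected: exit status 115
    kubectl --context functional-104204 delete -f testdata/invalidsvc.yaml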

TestFunctional/parallel/ConfigCmd (0.34s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-104204 config get cpus: exit status 14 (95.356836ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-104204 config get cpus: exit status 14 (53.45257ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.34s)
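
The config round-trip above, sketched; per the log, `config get` on an unset key exits with status 14.

    minikube -p functional-104204 config unset cpus
    minikube -p functional-104204 config get cpus     # exit 14: key not in config
    minikube -p functional-104204 config set cpus 2
    minikube -p functional-104204 config get cpus     # prints 2
    minikube -p functional-104204 config unset cpus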

TestFunctional/parallel/DashboardCmd (10.65s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-104204 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-104204 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 45974: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.65s)

TestFunctional/parallel/DryRun (0.64s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-104204 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-104204 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (299.55689ms)
-- stdout --
	* [functional-104204] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17297
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17297-5744/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17297-5744/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0925 10:42:28.174407   45157 out.go:296] Setting OutFile to fd 1 ...
	I0925 10:42:28.174531   45157 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 10:42:28.174542   45157 out.go:309] Setting ErrFile to fd 2...
	I0925 10:42:28.174552   45157 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 10:42:28.174721   45157 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17297-5744/.minikube/bin
	I0925 10:42:28.175272   45157 out.go:303] Setting JSON to false
	I0925 10:42:28.176479   45157 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1500,"bootTime":1695637048,"procs":561,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0925 10:42:28.176542   45157 start.go:138] virtualization: kvm guest
	I0925 10:42:28.202171   45157 out.go:177] * [functional-104204] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0925 10:42:28.248158   45157 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 10:42:28.248180   45157 notify.go:220] Checking for updates...
	I0925 10:42:28.250800   45157 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 10:42:28.254283   45157 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17297-5744/kubeconfig
	I0925 10:42:28.271318   45157 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17297-5744/.minikube
	I0925 10:42:28.284487   45157 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0925 10:42:28.286604   45157 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 10:42:28.288981   45157 config.go:182] Loaded profile config "functional-104204": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0925 10:42:28.289502   45157 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 10:42:28.313800   45157 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0925 10:42:28.313901   45157 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0925 10:42:28.366772   45157 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:50 SystemTime:2023-09-25 10:42:28.357185 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1042-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0925 10:42:28.366907   45157 docker.go:294] overlay module found
	I0925 10:42:28.374204   45157 out.go:177] * Using the docker driver based on existing profile
	I0925 10:42:28.383756   45157 start.go:298] selected driver: docker
	I0925 10:42:28.383773   45157 start.go:902] validating driver "docker" against &{Name:functional-104204 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-104204 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 10:42:28.383862   45157 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 10:42:28.421032   45157 out.go:177] 
	W0925 10:42:28.425237   45157 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0925 10:42:28.426781   45157 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-104204 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.64s)
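
A sketch of the validation exercised here: --dry-run still runs flag checks without creating anything, so an impossible memory request fails fast with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY), while the second, unconstrained dry run succeeds.

    minikube start -p functional-104204 --dry-run --memory 250MB \
        --driver=docker --container-runtime=crio    # exit 23: below the 1800MB minimum
    minikube start -p functional-104204 --dry-run \
        --driver=docker --container-runtime=crio    # validates and exits cleanly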

TestFunctional/parallel/InternationalLanguage (0.31s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-104204 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-104204 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (306.708008ms)
-- stdout --
	* [functional-104204] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17297
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17297-5744/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17297-5744/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0925 10:42:28.421006   45254 out.go:296] Setting OutFile to fd 1 ...
	I0925 10:42:28.421161   45254 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 10:42:28.421172   45254 out.go:309] Setting ErrFile to fd 2...
	I0925 10:42:28.421179   45254 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 10:42:28.421607   45254 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17297-5744/.minikube/bin
	I0925 10:42:28.426345   45254 out.go:303] Setting JSON to false
	I0925 10:42:28.427956   45254 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1500,"bootTime":1695637048,"procs":559,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0925 10:42:28.428048   45254 start.go:138] virtualization: kvm guest
	I0925 10:42:28.437800   45254 out.go:177] * [functional-104204] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	I0925 10:42:28.440917   45254 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 10:42:28.440601   45254 notify.go:220] Checking for updates...
	I0925 10:42:28.443135   45254 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 10:42:28.446219   45254 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17297-5744/kubeconfig
	I0925 10:42:28.448106   45254 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17297-5744/.minikube
	I0925 10:42:28.502128   45254 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0925 10:42:28.504659   45254 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 10:42:28.509970   45254 config.go:182] Loaded profile config "functional-104204": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0925 10:42:28.510429   45254 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 10:42:28.535242   45254 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0925 10:42:28.535333   45254 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0925 10:42:28.593950   45254 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:50 SystemTime:2023-09-25 10:42:28.584408978 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1042-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0925 10:42:28.594082   45254 docker.go:294] overlay module found
	I0925 10:42:28.606445   45254 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0925 10:42:28.610678   45254 start.go:298] selected driver: docker
	I0925 10:42:28.610701   45254 start.go:902] validating driver "docker" against &{Name:functional-104204 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-104204 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 10:42:28.610854   45254 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 10:42:28.668793   45254 out.go:177] 
	W0925 10:42:28.671714   45254 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0925 10:42:28.680165   45254 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.31s)
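
The same failing dry run, with the client message localized to French. A sketch, assuming minikube picks the language up from the standard locale environment variables (how this test run selects the locale is not shown in the log):

    LC_ALL=fr_FR.UTF-8 minikube start -p functional-104204 --dry-run --memory 250MB \
        --driver=docker --container-runtime=crio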

TestFunctional/parallel/StatusCmd (1.15s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.15s)
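
The three status invocations, sketched: default table output, a Go-template format string (trimmed here from the one in the log), and JSON.

    minikube -p functional-104204 status
    minikube -p functional-104204 status -f 'host:{{.Host}},apiserver:{{.APIServer}}'
    minikube -p functional-104204 status -o json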

TestFunctional/parallel/ServiceCmdConnect (8.66s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-104204 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-104204 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-9n5p6" [21deadbc-7797-4a69-897d-114edd823e6c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-9n5p6" [21deadbc-7797-4a69-897d-114edd823e6c] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.011478756s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:31932
functional_test.go:1674: http://192.168.49.2:31932: success! body:
Hostname: hello-node-connect-55497b8b78-9n5p6
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31932
	user-agent=Go-http-client/1.1
Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.66s)
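
The end-to-end flow above, sketched: deploy an echo server, expose it as a NodePort Service, then ask minikube for a reachable URL and curl it. The NodePort in the log (31932) is assigned by Kubernetes and will differ per run.

    kubectl --context functional-104204 create deployment hello-node-connect \
        --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-104204 expose deployment hello-node-connect \
        --type=NodePort --port=8080
    URL=$(minikube -p functional-104204 service hello-node-connect --url)
    curl "$URL"    # echoes the request back, as in the body above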

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (29.45s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [3499c66c-697f-4182-97b5-db84a03b304d] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.052747718s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-104204 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-104204 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-104204 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-104204 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [eae298cc-8984-4a62-b3ee-745d7913e8cf] Pending
helpers_test.go:344: "sp-pod" [eae298cc-8984-4a62-b3ee-745d7913e8cf] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [eae298cc-8984-4a62-b3ee-745d7913e8cf] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.013098372s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-104204 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-104204 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-104204 delete -f testdata/storage-provisioner/pod.yaml: (1.226076529s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-104204 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e537a0d2-02f8-4f4e-a36f-177c45c08456] Pending
helpers_test.go:344: "sp-pod" [e537a0d2-02f8-4f4e-a36f-177c45c08456] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
2023/09/25 10:42:39 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:344: "sp-pod" [e537a0d2-02f8-4f4e-a36f-177c45c08456] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.009337801s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-104204 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (29.45s)
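
The persistence check, sketched: write a file through the claim from one pod, delete the pod, then confirm the file survives into a replacement pod mounting the same PVC (the manifests are the testdata ones referenced in the log).

    kubectl --context functional-104204 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-104204 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-104204 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-104204 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-104204 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-104204 exec sp-pod -- ls /tmp/mount    # foo survives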

TestFunctional/parallel/SSHCmd (0.56s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.56s)

TestFunctional/parallel/CpCmd (1.31s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 ssh -n functional-104204 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 cp functional-104204:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2790376223/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 ssh -n functional-104204 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.31s)
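
The copy round-trip, sketched (the host-side paths are illustrative): push a file into the node, read it back over ssh, and pull it out again.

    minikube -p functional-104204 cp testdata/cp-test.txt /home/docker/cp-test.txt
    minikube -p functional-104204 ssh -n functional-104204 "sudo cat /home/docker/cp-test.txt"
    minikube -p functional-104204 cp functional-104204:/home/docker/cp-test.txt /tmp/cp-test.txt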

TestFunctional/parallel/MySQL (21.91s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-104204 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-wt74j" [eec79995-341d-45a7-9963-d913dbcf156d] Pending
helpers_test.go:344: "mysql-859648c796-wt74j" [eec79995-341d-45a7-9963-d913dbcf156d] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-wt74j" [eec79995-341d-45a7-9963-d913dbcf156d] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.011102075s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-104204 exec mysql-859648c796-wt74j -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-104204 exec mysql-859648c796-wt74j -- mysql -ppassword -e "show databases;": exit status 1 (129.219207ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-104204 exec mysql-859648c796-wt74j -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.91s)

TestFunctional/parallel/FileSync (0.25s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/12516/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 ssh "sudo cat /etc/test/nested/copy/12516/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)

TestFunctional/parallel/CertSync (1.71s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/12516.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 ssh "sudo cat /etc/ssl/certs/12516.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/12516.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 ssh "sudo cat /usr/share/ca-certificates/12516.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/125162.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 ssh "sudo cat /etc/ssl/certs/125162.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/125162.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 ssh "sudo cat /usr/share/ca-certificates/125162.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.71s)

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-104204 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-104204 ssh "sudo systemctl is-active docker": exit status 1 (358.817783ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-104204 ssh "sudo systemctl is-active containerd": exit status 1 (256.402351ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)
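
This check leans on systemd semantics: `systemctl is-active` prints the unit state and exits 0 only when the unit is active, so `inactive` plus ssh exit status 3 confirms the other runtimes stay disabled when crio is selected. A sketch:

    minikube -p functional-104204 ssh "sudo systemctl is-active docker"        # inactive, exit 3
    minikube -p functional-104204 ssh "sudo systemctl is-active containerd"    # inactive, exit 3
    minikube -p functional-104204 ssh "sudo systemctl is-active crio"          # active, exit 0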

TestFunctional/parallel/License (0.14s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.14s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-104204 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-104204 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-6r26s" [7d5b039b-8144-4bbc-8bfb-52507e1f3b52] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-6r26s" [7d5b039b-8144-4bbc-8bfb-52507e1f3b52] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.018526132s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.24s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-104204 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-104204 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-104204 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 43656: os: process already finished
helpers_test.go:502: unable to terminate pid 43346: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-104204 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-104204 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-104204 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [8d4bedc0-2e77-4699-a371-8547eddd1623] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [8d4bedc0-2e77-4699-a371-8547eddd1623] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.015570871s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.37s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.52s)

                                                
                                    
TestFunctional/parallel/Version/short (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 version --short
--- PASS: TestFunctional/parallel/Version/short (0.11s)

                                                
                                    
TestFunctional/parallel/Version/components (1.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-amd64 -p functional-104204 version -o=json --components: (1.398719751s)
--- PASS: TestFunctional/parallel/Version/components (1.40s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 service list -o json
functional_test.go:1493: Took "494.541526ms" to run "out/minikube-linux-amd64 -p functional-104204 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-104204 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.2
registry.k8s.io/kube-proxy:v1.28.2
registry.k8s.io/kube-controller-manager:v1.28.2
registry.k8s.io/kube-apiserver:v1.28.2
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-104204
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-104204 image ls --format short --alsologtostderr:
I0925 10:42:53.080526   50239 out.go:296] Setting OutFile to fd 1 ...
I0925 10:42:53.080717   50239 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0925 10:42:53.080728   50239 out.go:309] Setting ErrFile to fd 2...
I0925 10:42:53.080733   50239 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0925 10:42:53.081020   50239 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17297-5744/.minikube/bin
I0925 10:42:53.081801   50239 config.go:182] Loaded profile config "functional-104204": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I0925 10:42:53.081905   50239 config.go:182] Loaded profile config "functional-104204": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I0925 10:42:53.082329   50239 cli_runner.go:164] Run: docker container inspect functional-104204 --format={{.State.Status}}
I0925 10:42:53.101790   50239 ssh_runner.go:195] Run: systemctl --version
I0925 10:42:53.101842   50239 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-104204
I0925 10:42:53.121919   50239 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/functional-104204/id_rsa Username:docker}
I0925 10:42:53.213023   50239 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-104204 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/google-containers/addon-resizer  | functional-104204  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-scheduler          | v1.28.2            | 7a5d9d67a13f6 | 61.5MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/library/mysql                 | 5.7                | 92034fe9a41f4 | 601MB  |
| docker.io/library/nginx                 | latest             | 61395b4c586da | 191MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/kube-proxy              | v1.28.2            | c120fed2beb84 | 74.7MB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| docker.io/library/nginx                 | alpine             | 433dbc17191a7 | 44.4MB |
| registry.k8s.io/kube-apiserver          | v1.28.2            | cdcab12b2dd16 | 127MB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/kube-controller-manager | v1.28.2            | 55f13c92defb1 | 123MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-104204 image ls --format table --alsologtostderr:
I0925 10:42:53.324701   50432 out.go:296] Setting OutFile to fd 1 ...
I0925 10:42:53.324842   50432 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0925 10:42:53.324852   50432 out.go:309] Setting ErrFile to fd 2...
I0925 10:42:53.324856   50432 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0925 10:42:53.325104   50432 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17297-5744/.minikube/bin
I0925 10:42:53.325640   50432 config.go:182] Loaded profile config "functional-104204": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I0925 10:42:53.325737   50432 config.go:182] Loaded profile config "functional-104204": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I0925 10:42:53.326155   50432 cli_runner.go:164] Run: docker container inspect functional-104204 --format={{.State.Status}}
I0925 10:42:53.351929   50432 ssh_runner.go:195] Run: systemctl --version
I0925 10:42:53.351982   50432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-104204
I0925 10:42:53.371431   50432 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/functional-104204/id_rsa Username:docker}
I0925 10:42:53.460330   50432 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-104204 image ls --format json --alsologtostderr:
[{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c0
5ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"61395b4c586da2b9b3b7ca903ea6a448e6783dfdd7f768ff2c1a0f3360aaba99","repoDigests":["docker.io/library/nginx@sha256:32da30332506740a2f7c34d5dc70467b7f14ec67d912703568daff790ab3f755","docker.io/library/nginx@sha256:b2888fc9cfe7cd9d6727aeb462d13c7c45dec413b66f2819a36c4a3cb9d4df75"],"repoTags":["docker.io/library/nginx:latest"],"size":"190820094"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0","repoDigests":["registry.k8s.io/kube
-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded","registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.2"],"size":"74687895"},{"id":"7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab","registry.k8s.io/kube-scheduler@sha256:8fc5b9b97128515266d5435273682ba36115d9ca1b68a5749e6f9a23927ef543"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.2"],"size":"61485878"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"115053965e86b2df4d78af7
8d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"92034fe9a41f4344b97f3fc88a8796248e2cfa9b934be58379f3dbc150d07d9d","repoDigests":["docker.io/library/mysql@sha256:2c23f254c6b9444ecda9ba36051a9800e8934a2f5828ecc8730531db8142af83","docker.io/library/mysql@sha256:aaa1374f1e6c24d73e9dfa8f2cdae81c8054e6d1d80c32da883a9050258b6e83"],"repoTags":["docker.io/library/mysql:5.7"],"size":"601277093"},{"id":"433dbc17191a7830a9db6454bcc23414ad36caecedab39d1e51d41083ab1d629","repoDigests":["docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70","docker.io/library/nginx@sha256:7ba6006df2033690d8c64bd8df69e4a1957b78e57b4e32141c78d72a5e0de63d"],"repoTags":["docker.io/library/nginx:alpine"],"size":"44389673"},{"id":"f
fd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-104204"],"size":"34114467"},{"id":"cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce","repoDigests":["registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631","registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdcef99dd3237522049a0b32cad736c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.2"],"size":"127149008"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboar
d@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4","registry.k8s.io/kube-controller-manager@sha256:757a9c9d2f2329799490f9cec6c8ea12dfe4b6225051f6436f39d22a1def682e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.2"],"size":"123171638"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe
50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-104204 image ls --format json --alsologtostderr:
I0925 10:42:53.318080   50421 out.go:296] Setting OutFile to fd 1 ...
I0925 10:42:53.318333   50421 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0925 10:42:53.318343   50421 out.go:309] Setting ErrFile to fd 2...
I0925 10:42:53.318348   50421 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0925 10:42:53.318545   50421 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17297-5744/.minikube/bin
I0925 10:42:53.319072   50421 config.go:182] Loaded profile config "functional-104204": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I0925 10:42:53.319168   50421 config.go:182] Loaded profile config "functional-104204": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I0925 10:42:53.319712   50421 cli_runner.go:164] Run: docker container inspect functional-104204 --format={{.State.Status}}
I0925 10:42:53.341567   50421 ssh_runner.go:195] Run: systemctl --version
I0925 10:42:53.341613   50421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-104204
I0925 10:42:53.362230   50421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/functional-104204/id_rsa Username:docker}
I0925 10:42:53.452664   50421 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
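
The JSON above is a single array of objects with id, repoDigests, repoTags, and size fields (note that size is a decimal string, not a number). A minimal sketch of decoding it in Go, using the CI-built binary path from this log; the image struct is ours, not part of the suite:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors the fields visible in the JSON output above; size is a
// string in crictl/minikube output, so it is declared as one here.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-104204",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		// Print a truncated ID, the tags, and the raw size string.
		fmt.Printf("%.13s  %v  %s bytes\n", img.ID, img.RepoTags, img.Size)
	}
}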

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-104204 image ls --format yaml --alsologtostderr:
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 61395b4c586da2b9b3b7ca903ea6a448e6783dfdd7f768ff2c1a0f3360aaba99
repoDigests:
- docker.io/library/nginx@sha256:32da30332506740a2f7c34d5dc70467b7f14ec67d912703568daff790ab3f755
- docker.io/library/nginx@sha256:b2888fc9cfe7cd9d6727aeb462d13c7c45dec413b66f2819a36c4a3cb9d4df75
repoTags:
- docker.io/library/nginx:latest
size: "190820094"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab
- registry.k8s.io/kube-scheduler@sha256:8fc5b9b97128515266d5435273682ba36115d9ca1b68a5749e6f9a23927ef543
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.2
size: "61485878"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-104204
size: "34114467"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4
- registry.k8s.io/kube-controller-manager@sha256:757a9c9d2f2329799490f9cec6c8ea12dfe4b6225051f6436f39d22a1def682e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.2
size: "123171638"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 92034fe9a41f4344b97f3fc88a8796248e2cfa9b934be58379f3dbc150d07d9d
repoDigests:
- docker.io/library/mysql@sha256:2c23f254c6b9444ecda9ba36051a9800e8934a2f5828ecc8730531db8142af83
- docker.io/library/mysql@sha256:aaa1374f1e6c24d73e9dfa8f2cdae81c8054e6d1d80c32da883a9050258b6e83
repoTags:
- docker.io/library/mysql:5.7
size: "601277093"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631
- registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdcef99dd3237522049a0b32cad736c
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.2
size: "127149008"
- id: 433dbc17191a7830a9db6454bcc23414ad36caecedab39d1e51d41083ab1d629
repoDigests:
- docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70
- docker.io/library/nginx@sha256:7ba6006df2033690d8c64bd8df69e4a1957b78e57b4e32141c78d72a5e0de63d
repoTags:
- docker.io/library/nginx:alpine
size: "44389673"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0
repoDigests:
- registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded
- registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf
repoTags:
- registry.k8s.io/kube-proxy:v1.28.2
size: "74687895"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-104204 image ls --format yaml --alsologtostderr:
I0925 10:42:53.086786   50241 out.go:296] Setting OutFile to fd 1 ...
I0925 10:42:53.086886   50241 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0925 10:42:53.086897   50241 out.go:309] Setting ErrFile to fd 2...
I0925 10:42:53.086905   50241 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0925 10:42:53.087214   50241 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17297-5744/.minikube/bin
I0925 10:42:53.087945   50241 config.go:182] Loaded profile config "functional-104204": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I0925 10:42:53.088091   50241 config.go:182] Loaded profile config "functional-104204": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I0925 10:42:53.088618   50241 cli_runner.go:164] Run: docker container inspect functional-104204 --format={{.State.Status}}
I0925 10:42:53.114393   50241 ssh_runner.go:195] Run: systemctl --version
I0925 10:42:53.114451   50241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-104204
I0925 10:42:53.136763   50241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/functional-104204/id_rsa Username:docker}
I0925 10:42:53.224834   50241 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (1.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-104204 ssh pgrep buildkitd: exit status 1 (265.11119ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 image build -t localhost/my-image:functional-104204 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-104204 image build -t localhost/my-image:functional-104204 testdata/build --alsologtostderr: (1.332185107s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-104204 image build -t localhost/my-image:functional-104204 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 9fde62746d8
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-104204
--> 90e139c9fab
Successfully tagged localhost/my-image:functional-104204
90e139c9fab163ab98a0f9e202377b7ca164ee9424d1dca0e0fcb23c21365690
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-104204 image build -t localhost/my-image:functional-104204 testdata/build --alsologtostderr:
I0925 10:42:53.346823   50449 out.go:296] Setting OutFile to fd 1 ...
I0925 10:42:53.346954   50449 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0925 10:42:53.346960   50449 out.go:309] Setting ErrFile to fd 2...
I0925 10:42:53.346967   50449 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0925 10:42:53.347263   50449 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17297-5744/.minikube/bin
I0925 10:42:53.348441   50449 config.go:182] Loaded profile config "functional-104204": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I0925 10:42:53.349264   50449 config.go:182] Loaded profile config "functional-104204": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I0925 10:42:53.349717   50449 cli_runner.go:164] Run: docker container inspect functional-104204 --format={{.State.Status}}
I0925 10:42:53.369012   50449 ssh_runner.go:195] Run: systemctl --version
I0925 10:42:53.369053   50449 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-104204
I0925 10:42:53.391277   50449 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/functional-104204/id_rsa Username:docker}
I0925 10:42:53.485305   50449 build_images.go:151] Building image from path: /tmp/build.2253330245.tar
I0925 10:42:53.485379   50449 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0925 10:42:53.495753   50449 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2253330245.tar
I0925 10:42:53.499164   50449 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2253330245.tar: stat -c "%s %y" /var/lib/minikube/build/build.2253330245.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2253330245.tar': No such file or directory
I0925 10:42:53.499194   50449 ssh_runner.go:362] scp /tmp/build.2253330245.tar --> /var/lib/minikube/build/build.2253330245.tar (3072 bytes)
I0925 10:42:53.558053   50449 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2253330245
I0925 10:42:53.565588   50449 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2253330245 -xf /var/lib/minikube/build/build.2253330245.tar
I0925 10:42:53.573348   50449 crio.go:297] Building image: /var/lib/minikube/build/build.2253330245
I0925 10:42:53.573409   50449 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-104204 /var/lib/minikube/build/build.2253330245 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0925 10:42:54.614724   50449 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-104204 /var/lib/minikube/build/build.2253330245 --cgroup-manager=cgroupfs: (1.04128542s)
I0925 10:42:54.614783   50449 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2253330245
I0925 10:42:54.622435   50449 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2253330245.tar
I0925 10:42:54.629424   50449 build_images.go:207] Built localhost/my-image:functional-104204 from /tmp/build.2253330245.tar
I0925 10:42:54.629450   50449 build_images.go:123] succeeded building to: functional-104204
I0925 10:42:54.629454   50449 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.79s)
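
The three STEP lines in the stdout above imply a build context of roughly the shape sketched below. This is reconstructed from the log alone: the real testdata/build may differ, and the content.txt payload here is a placeholder. The sketch replays the build through the same minikube image build entry point, which tars the directory, ships it to the node, and builds it there with podman (see the "Building image from path" lines above):

package main

import (
	"os"
	"os/exec"
	"path/filepath"
)

// dockerfile is reconstructed from the STEP 1/3..3/3 lines in the log above.
const dockerfile = `FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
`

func main() {
	dir, err := os.MkdirTemp("", "build")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)
	os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644)
	os.WriteFile(filepath.Join(dir, "content.txt"), []byte("hello\n"), 0o644) // placeholder payload

	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-104204",
		"image", "build", "-t", "localhost/my-image:functional-104204", dir)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}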

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-104204
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.88s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:30120
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.39s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 image load --daemon gcr.io/google-containers/addon-resizer:functional-104204 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-104204 image load --daemon gcr.io/google-containers/addon-resizer:functional-104204 --alsologtostderr: (5.072072938s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.30s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-104204 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
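
The jsonpath query above returns an empty string until minikube tunnel has assigned the LoadBalancer service an ingress IP. A minimal polling sketch under the same assumptions (kubectl on PATH, tunnel already running); the 60x2s retry budget is arbitrary:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// The same jsonpath the test uses to read the assigned ingress IP.
	const jsonpath = `-o=jsonpath={.status.loadBalancer.ingress[0].ip}`
	for i := 0; i < 60; i++ {
		out, err := exec.Command("kubectl", "--context", "functional-104204",
			"get", "svc", "nginx-svc", jsonpath).Output()
		if ip := strings.TrimSpace(string(out)); err == nil && ip != "" {
			fmt.Println("ingress IP:", ip)
			return
		}
		time.Sleep(2 * time.Second)
	}
	panic("no ingress IP assigned; is `minikube tunnel` running?")
}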

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.110.92.237 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-104204 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:30120
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.36s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "269.984119ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "38.191674ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "274.491575ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "38.573281ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (11.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-104204 /tmp/TestFunctionalparallelMountCmdany-port3816850729/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1695638551433082143" to /tmp/TestFunctionalparallelMountCmdany-port3816850729/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1695638551433082143" to /tmp/TestFunctionalparallelMountCmdany-port3816850729/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1695638551433082143" to /tmp/TestFunctionalparallelMountCmdany-port3816850729/001/test-1695638551433082143
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-104204 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (331.083778ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 25 10:42 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 25 10:42 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 25 10:42 test-1695638551433082143
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 ssh cat /mount-9p/test-1695638551433082143
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-104204 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [4f2251a9-f4cd-47d1-aa2e-98afc06b465e] Pending
helpers_test.go:344: "busybox-mount" [4f2251a9-f4cd-47d1-aa2e-98afc06b465e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [4f2251a9-f4cd-47d1-aa2e-98afc06b465e] Running
helpers_test.go:344: "busybox-mount" [4f2251a9-f4cd-47d1-aa2e-98afc06b465e] Running: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [4f2251a9-f4cd-47d1-aa2e-98afc06b465e] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.010209907s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-104204 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-104204 /tmp/TestFunctionalparallelMountCmdany-port3816850729/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (11.35s)
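
Note that the first findmnt above fails with exit status 1 simply because the 9p mount was still coming up; the test retries and succeeds on the second attempt. A minimal sketch of the same start-then-poll pattern, with an assumed host directory /tmp/mnt and an arbitrary 30s retry budget:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	// Host directory to export; /tmp/mnt is an arbitrary choice for this sketch.
	if err := os.MkdirAll("/tmp/mnt", 0o755); err != nil {
		panic(err)
	}
	// Start the 9p mount in the background, as the test's daemon helper does.
	mount := exec.Command("out/minikube-linux-amd64", "mount",
		"-p", "functional-104204", "/tmp/mnt:/mount-9p")
	if err := mount.Start(); err != nil {
		panic(err)
	}
	defer mount.Process.Kill()

	// Poll until the mount appears inside the guest, mirroring the retry
	// after the expected first failure in the log above.
	for i := 0; i < 30; i++ {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-104204",
			"ssh", "findmnt -T /mount-9p | grep 9p").CombinedOutput()
		if err == nil {
			fmt.Printf("%s", out)
			return
		}
		time.Sleep(time.Second)
	}
	panic("mount never appeared")
}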

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 image load --daemon gcr.io/google-containers/addon-resizer:functional-104204 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-104204 image load --daemon gcr.io/google-containers/addon-resizer:functional-104204 --alsologtostderr: (2.884686685s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.08s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-104204
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 image load --daemon gcr.io/google-containers/addon-resizer:functional-104204 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-104204 image load --daemon gcr.io/google-containers/addon-resizer:functional-104204 --alsologtostderr: (4.212382178s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.18s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 image save gcr.io/google-containers/addon-resizer:functional-104204 /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-104204 image save gcr.io/google-containers/addon-resizer:functional-104204 /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (2.001112724s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.00s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-104204 /tmp/TestFunctionalparallelMountCmdspecific-port379004879/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-104204 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (470.174486ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-104204 /tmp/TestFunctionalparallelMountCmdspecific-port379004879/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-104204 ssh "sudo umount -f /mount-9p": exit status 1 (485.390752ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-104204 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-104204 /tmp/TestFunctionalparallelMountCmdspecific-port379004879/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.69s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 image rm gcr.io/google-containers/addon-resizer:functional-104204 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.92s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-104204 /tmp/TestFunctionalparallelMountCmdVerifyCleanup493022370/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-104204 /tmp/TestFunctionalparallelMountCmdVerifyCleanup493022370/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-104204 /tmp/TestFunctionalparallelMountCmdVerifyCleanup493022370/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-104204 ssh "findmnt -T" /mount1: exit status 1 (580.577455ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-104204 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-104204 /tmp/TestFunctionalparallelMountCmdVerifyCleanup493022370/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-104204 /tmp/TestFunctionalparallelMountCmdVerifyCleanup493022370/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-104204 /tmp/TestFunctionalparallelMountCmdVerifyCleanup493022370/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
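The cleanup being verified is that a single kill switch tears down all three mount daemons at once; a hand-run equivalent (sketch):

	out/minikube-linux-amd64 mount -p functional-104204 --kill=true
	# each mount point should now be gone from the guest
	out/minikube-linux-amd64 -p functional-104204 ssh "findmnt -T /mount1" || echo "/mount1 unmounted"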
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.33s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-104204
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-104204 image save --daemon gcr.io/google-containers/addon-resizer:functional-104204 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-104204 image save --daemon gcr.io/google-containers/addon-resizer:functional-104204 --alsologtostderr: (1.252625719s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-104204
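The round trip being validated: delete the tag from the host Docker daemon, pull it back out of the cluster with image save --daemon, then confirm the host can inspect it again (sketch; commands mirror the run above):

	docker rmi gcr.io/google-containers/addon-resizer:functional-104204
	out/minikube-linux-amd64 -p functional-104204 image save --daemon gcr.io/google-containers/addon-resizer:functional-104204
	docker image inspect gcr.io/google-containers/addon-resizer:functional-104204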
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.29s)

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-104204
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-104204
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-104204
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestIngressAddonLegacy/StartLegacyK8sCluster (62.79s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-260900 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0925 10:43:28.585006   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-260900 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m2.788442339s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (62.79s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.3s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-260900 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-260900 addons enable ingress --alsologtostderr -v=5: (10.300150438s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.30s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.51s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-260900 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.51s)

TestJSONOutput/start/Command (66.79s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-073423 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0925 10:47:26.414548   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/functional-104204/client.crt: no such file or directory
E0925 10:47:36.654853   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/functional-104204/client.crt: no such file or directory
E0925 10:47:57.135983   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/functional-104204/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-073423 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m6.788625102s)
--- PASS: TestJSONOutput/start/Command (66.79s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.63s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-073423 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.63s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.57s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-073423 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.57s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.72s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-073423 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-073423 --output=json --user=testUser: (5.720596545s)
--- PASS: TestJSONOutput/stop/Command (5.72s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.18s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-140618 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-140618 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (58.55219ms)

-- stdout --
	{"specversion":"1.0","id":"b1de29ad-da73-467d-bfcc-1ba46b25de35","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-140618] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9d69c08d-fd76-4f1a-8950-827dffcae5f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17297"}}
	{"specversion":"1.0","id":"c83d259e-1cd0-4475-aacd-748ad8f47de1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"891d2f90-f504-41de-ae1f-fca1d72eb575","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17297-5744/kubeconfig"}}
	{"specversion":"1.0","id":"7a56403f-0944-4fe2-9dc9-4f9866d9d94e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17297-5744/.minikube"}}
	{"specversion":"1.0","id":"a3502490-0160-437a-b4d7-0f9419b7908e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"276fdf22-5a6f-4392-9ca0-805d978ba28f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9972f93c-cbc0-43a7-b43b-20cf0df1dd44","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-140618" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-140618
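Each line emitted under --output=json is a CloudEvents-style JSON object, and the failure surfaces as an io.k8s.sigs.minikube.error event carrying the exit code and message. A sketch for extracting those fields (assumes jq is installed; the profile name is a placeholder):

	out/minikube-linux-amd64 start -p demo --output=json --driver=fail 2>/dev/null \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'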
--- PASS: TestErrorJSONOutput (0.18s)

TestKicCustomNetwork/create_custom_network (31.37s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-688207 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-688207 --network=: (29.33959251s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-688207" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-688207
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-688207: (2.012731103s)
--- PASS: TestKicCustomNetwork/create_custom_network (31.37s)

TestKicCustomNetwork/use_default_bridge_network (26.53s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-757195 --network=bridge
E0925 10:49:19.593900   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/client.crt: no such file or directory
E0925 10:49:19.599249   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/client.crt: no such file or directory
E0925 10:49:19.609509   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/client.crt: no such file or directory
E0925 10:49:19.629798   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/client.crt: no such file or directory
E0925 10:49:19.670055   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/client.crt: no such file or directory
E0925 10:49:19.750380   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/client.crt: no such file or directory
E0925 10:49:19.910819   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/client.crt: no such file or directory
E0925 10:49:20.231365   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/client.crt: no such file or directory
E0925 10:49:20.872305   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/client.crt: no such file or directory
E0925 10:49:22.152746   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/client.crt: no such file or directory
E0925 10:49:24.712952   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/client.crt: no such file or directory
E0925 10:49:29.833110   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-757195 --network=bridge: (24.627684939s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-757195" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-757195
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-757195: (1.889677972s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (26.53s)

TestKicExistingNetwork (26.44s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-761967 --network=existing-network
E0925 10:49:40.073419   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/client.crt: no such file or directory
E0925 10:50:00.017277   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/functional-104204/client.crt: no such file or directory
E0925 10:50:00.554117   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-761967 --network=existing-network: (24.409283844s)
helpers_test.go:175: Cleaning up "existing-network-761967" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-761967
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-761967: (1.9059928s)
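Unlike the custom-network cases above, this test points minikube at a Docker network that already exists (presumably created by the test ahead of the logged start). A sketch of the flow, with the network name from this run:

	docker network create existing-network
	out/minikube-linux-amd64 start -p existing-network-761967 --network=existing-network
	# the minikube container should appear among the network's endpoints
	docker network inspect existing-network --format "{{range .Containers}}{{.Name}} {{end}}"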
--- PASS: TestKicExistingNetwork (26.44s)

TestKicCustomSubnet (26.49s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-068933 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-068933 --subnet=192.168.60.0/24: (24.443963397s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-068933 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-068933" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-068933
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-068933: (2.03296481s)
--- PASS: TestKicCustomSubnet (26.49s)

TestKicStaticIP (26.68s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-057635 --static-ip=192.168.200.200
E0925 10:50:41.514411   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/client.crt: no such file or directory
E0925 10:50:44.739100   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-057635 --static-ip=192.168.200.200: (24.604490006s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-057635 ip
helpers_test.go:175: Cleaning up "static-ip-057635" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-057635
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-057635: (1.966394361s)
--- PASS: TestKicStaticIP (26.68s)

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (49.97s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-108862 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-108862 --driver=docker  --container-runtime=crio: (21.41634015s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-111029 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-111029 --driver=docker  --container-runtime=crio: (23.639810134s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-108862
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-111029
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-111029" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-111029
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-111029: (1.804876673s)
helpers_test.go:175: Cleaning up "first-108862" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-108862
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-108862: (2.189230186s)
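profile list -ojson groups profiles under valid and invalid keys; a sketch for pulling out just the names (assumes jq is installed and that key layout):

	out/minikube-linux-amd64 profile list -ojson | jq -r '.valid[].Name'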
--- PASS: TestMinikubeProfile (49.97s)

TestMountStart/serial/StartWithMountFirst (7.83s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-909279 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-909279 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.832674298s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.83s)

TestMountStart/serial/VerifyMountFirst (0.23s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-909279 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.23s)

TestMountStart/serial/StartWithMountSecond (5.04s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-928828 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-928828 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.034969164s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.04s)

TestMountStart/serial/VerifyMountSecond (0.23s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-928828 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.23s)

TestMountStart/serial/DeleteFirst (1.59s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-909279 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-909279 --alsologtostderr -v=5: (1.593148513s)
--- PASS: TestMountStart/serial/DeleteFirst (1.59s)

TestMountStart/serial/VerifyMountPostDelete (0.23s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-928828 ssh -- ls /minikube-host
E0925 10:52:03.435027   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/client.crt: no such file or directory
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.23s)

TestMountStart/serial/Stop (1.18s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-928828
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-928828: (1.180428474s)
--- PASS: TestMountStart/serial/Stop (1.18s)

TestMountStart/serial/RestartStopped (6.89s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-928828
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-928828: (5.886180742s)
--- PASS: TestMountStart/serial/RestartStopped (6.89s)

TestMountStart/serial/VerifyMountPostStop (0.23s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-928828 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.23s)

TestMultiNode/serial/FreshStart2Nodes (83.75s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-529126 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0925 10:52:16.174085   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/functional-104204/client.crt: no such file or directory
E0925 10:52:43.858314   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/functional-104204/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-529126 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m23.318646945s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-529126 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (83.75s)

TestMultiNode/serial/DeployApp2Nodes (3.3s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-529126 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-529126 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-529126 -- rollout status deployment/busybox: (1.738723073s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-529126 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-529126 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-529126 -- exec busybox-5bc68d56bd-6xmht -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-529126 -- exec busybox-5bc68d56bd-jnhqs -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-529126 -- exec busybox-5bc68d56bd-6xmht -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-529126 -- exec busybox-5bc68d56bd-jnhqs -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-529126 -- exec busybox-5bc68d56bd-6xmht -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-529126 -- exec busybox-5bc68d56bd-jnhqs -- nslookup kubernetes.default.svc.cluster.local
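The nslookup calls above are repeated against both busybox replicas so that DNS is exercised from each node. Pod names change every run; a name-independent variant (sketch) execs through the deployment instead, at the cost of only hitting one replica:

	out/minikube-linux-amd64 kubectl -p multinode-529126 -- exec deploy/busybox -- nslookup kubernetes.default.svc.cluster.local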
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.30s)

TestMultiNode/serial/AddNode (50.65s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-529126 -v 3 --alsologtostderr
E0925 10:54:19.593420   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-529126 -v 3 --alsologtostderr: (50.076898058s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-529126 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (50.65s)

TestMultiNode/serial/ProfileList (0.25s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.25s)

TestMultiNode/serial/CopyFile (8.44s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-529126 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-529126 cp testdata/cp-test.txt multinode-529126:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-529126 ssh -n multinode-529126 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-529126 cp multinode-529126:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1736163963/001/cp-test_multinode-529126.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-529126 ssh -n multinode-529126 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-529126 cp multinode-529126:/home/docker/cp-test.txt multinode-529126-m02:/home/docker/cp-test_multinode-529126_multinode-529126-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-529126 ssh -n multinode-529126 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-529126 ssh -n multinode-529126-m02 "sudo cat /home/docker/cp-test_multinode-529126_multinode-529126-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-529126 cp multinode-529126:/home/docker/cp-test.txt multinode-529126-m03:/home/docker/cp-test_multinode-529126_multinode-529126-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-529126 ssh -n multinode-529126 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-529126 ssh -n multinode-529126-m03 "sudo cat /home/docker/cp-test_multinode-529126_multinode-529126-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-529126 cp testdata/cp-test.txt multinode-529126-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-529126 ssh -n multinode-529126-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-529126 cp multinode-529126-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1736163963/001/cp-test_multinode-529126-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-529126 ssh -n multinode-529126-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-529126 cp multinode-529126-m02:/home/docker/cp-test.txt multinode-529126:/home/docker/cp-test_multinode-529126-m02_multinode-529126.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-529126 ssh -n multinode-529126-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-529126 ssh -n multinode-529126 "sudo cat /home/docker/cp-test_multinode-529126-m02_multinode-529126.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-529126 cp multinode-529126-m02:/home/docker/cp-test.txt multinode-529126-m03:/home/docker/cp-test_multinode-529126-m02_multinode-529126-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-529126 ssh -n multinode-529126-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-529126 ssh -n multinode-529126-m03 "sudo cat /home/docker/cp-test_multinode-529126-m02_multinode-529126-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-529126 cp testdata/cp-test.txt multinode-529126-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-529126 ssh -n multinode-529126-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-529126 cp multinode-529126-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1736163963/001/cp-test_multinode-529126-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-529126 ssh -n multinode-529126-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-529126 cp multinode-529126-m03:/home/docker/cp-test.txt multinode-529126:/home/docker/cp-test_multinode-529126-m03_multinode-529126.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-529126 ssh -n multinode-529126-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-529126 ssh -n multinode-529126 "sudo cat /home/docker/cp-test_multinode-529126-m03_multinode-529126.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-529126 cp multinode-529126-m03:/home/docker/cp-test.txt multinode-529126-m02:/home/docker/cp-test_multinode-529126-m03_multinode-529126-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-529126 ssh -n multinode-529126-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-529126 ssh -n multinode-529126-m02 "sudo cat /home/docker/cp-test_multinode-529126-m03_multinode-529126-m02.txt"
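The copy matrix above pushes the same file host-to-node and node-to-node across all three machines, verifying each hop with a cat over ssh. Compressed into a loop (sketch; node names from this run):

	for n in multinode-529126 multinode-529126-m02 multinode-529126-m03; do
	  out/minikube-linux-amd64 -p multinode-529126 cp testdata/cp-test.txt "$n":/home/docker/cp-test.txt
	  out/minikube-linux-amd64 -p multinode-529126 ssh -n "$n" "sudo cat /home/docker/cp-test.txt"
	done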
--- PASS: TestMultiNode/serial/CopyFile (8.44s)

TestMultiNode/serial/StopNode (2.05s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-529126 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-529126 node stop m03: (1.175503246s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-529126 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-529126 status: exit status 7 (434.802296ms)

-- stdout --
	multinode-529126
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-529126-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-529126-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-529126 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-529126 status --alsologtostderr: exit status 7 (441.342666ms)

-- stdout --
	multinode-529126
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-529126-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-529126-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0925 10:54:44.667386  110097 out.go:296] Setting OutFile to fd 1 ...
	I0925 10:54:44.667639  110097 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 10:54:44.667650  110097 out.go:309] Setting ErrFile to fd 2...
	I0925 10:54:44.667656  110097 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 10:54:44.667836  110097 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17297-5744/.minikube/bin
	I0925 10:54:44.668002  110097 out.go:303] Setting JSON to false
	I0925 10:54:44.668036  110097 mustload.go:65] Loading cluster: multinode-529126
	I0925 10:54:44.668132  110097 notify.go:220] Checking for updates...
	I0925 10:54:44.668515  110097 config.go:182] Loaded profile config "multinode-529126": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0925 10:54:44.668532  110097 status.go:255] checking status of multinode-529126 ...
	I0925 10:54:44.668977  110097 cli_runner.go:164] Run: docker container inspect multinode-529126 --format={{.State.Status}}
	I0925 10:54:44.686316  110097 status.go:330] multinode-529126 host status = "Running" (err=<nil>)
	I0925 10:54:44.686342  110097 host.go:66] Checking if "multinode-529126" exists ...
	I0925 10:54:44.686567  110097 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-529126
	I0925 10:54:44.702633  110097 host.go:66] Checking if "multinode-529126" exists ...
	I0925 10:54:44.702869  110097 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0925 10:54:44.702908  110097 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-529126
	I0925 10:54:44.718335  110097 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/multinode-529126/id_rsa Username:docker}
	I0925 10:54:44.809504  110097 ssh_runner.go:195] Run: systemctl --version
	I0925 10:54:44.813200  110097 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 10:54:44.823197  110097 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0925 10:54:44.877130  110097 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:56 SystemTime:2023-09-25 10:54:44.868095678 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1042-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0925 10:54:44.877915  110097 kubeconfig.go:92] found "multinode-529126" server: "https://192.168.58.2:8443"
	I0925 10:54:44.877942  110097 api_server.go:166] Checking apiserver status ...
	I0925 10:54:44.877976  110097 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 10:54:44.887576  110097 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1420/cgroup
	I0925 10:54:44.895726  110097 api_server.go:182] apiserver freezer: "12:freezer:/docker/6e3734cea073519e90a2c10bb6c835ae434fe2891a64f316b0a297aecd57d5d5/crio/crio-5013735c0b755577295ab8cade8dc5d68a421efa8e7481ed68d0f66a4e08455a"
	I0925 10:54:44.895774  110097 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/6e3734cea073519e90a2c10bb6c835ae434fe2891a64f316b0a297aecd57d5d5/crio/crio-5013735c0b755577295ab8cade8dc5d68a421efa8e7481ed68d0f66a4e08455a/freezer.state
	I0925 10:54:44.903298  110097 api_server.go:204] freezer state: "THAWED"
	I0925 10:54:44.903330  110097 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0925 10:54:44.907246  110097 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0925 10:54:44.907269  110097 status.go:421] multinode-529126 apiserver status = Running (err=<nil>)
	I0925 10:54:44.907277  110097 status.go:257] multinode-529126 status: &{Name:multinode-529126 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0925 10:54:44.907291  110097 status.go:255] checking status of multinode-529126-m02 ...
	I0925 10:54:44.907516  110097 cli_runner.go:164] Run: docker container inspect multinode-529126-m02 --format={{.State.Status}}
	I0925 10:54:44.924193  110097 status.go:330] multinode-529126-m02 host status = "Running" (err=<nil>)
	I0925 10:54:44.924222  110097 host.go:66] Checking if "multinode-529126-m02" exists ...
	I0925 10:54:44.924457  110097 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-529126-m02
	I0925 10:54:44.940684  110097 host.go:66] Checking if "multinode-529126-m02" exists ...
	I0925 10:54:44.940954  110097 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0925 10:54:44.940986  110097 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-529126-m02
	I0925 10:54:44.958176  110097 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17297-5744/.minikube/machines/multinode-529126-m02/id_rsa Username:docker}
	I0925 10:54:45.045613  110097 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 10:54:45.055794  110097 status.go:257] multinode-529126-m02 status: &{Name:multinode-529126-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0925 10:54:45.055822  110097 status.go:255] checking status of multinode-529126-m03 ...
	I0925 10:54:45.056095  110097 cli_runner.go:164] Run: docker container inspect multinode-529126-m03 --format={{.State.Status}}
	I0925 10:54:45.071555  110097 status.go:330] multinode-529126-m03 host status = "Stopped" (err=<nil>)
	I0925 10:54:45.071578  110097 status.go:343] host is not running, skipping remaining checks
	I0925 10:54:45.071586  110097 status.go:257] multinode-529126-m03 status: &{Name:multinode-529126-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
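The --alsologtostderr trace above shows how status derives each field: container state from docker inspect, kubelet from systemctl is-active, and the apiserver from its freezer cgroup plus an HTTPS healthz probe. The final step by hand (endpoint taken from this run; -k because the apiserver certificate is not in the host trust store):

	curl -k https://192.168.58.2:8443/healthz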
--- PASS: TestMultiNode/serial/StopNode (2.05s)

TestMultiNode/serial/StartAfterStop (10.76s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-529126 node start m03 --alsologtostderr
E0925 10:54:47.275922   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/client.crt: no such file or directory
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-529126 node start m03 --alsologtostderr: (10.109412016s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-529126 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.76s)

TestMultiNode/serial/RestartKeepsNodes (116.47s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-529126
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-529126
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-529126: (24.812963034s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-529126 --wait=true -v=8 --alsologtostderr
E0925 10:55:44.738988   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-529126 --wait=true -v=8 --alsologtostderr: (1m31.578467827s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-529126
--- PASS: TestMultiNode/serial/RestartKeepsNodes (116.47s)

TestMultiNode/serial/DeleteNode (4.6s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-529126 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-529126 node delete m03: (4.049462146s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-529126 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.60s)
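Note: the final kubectl call verifies that every remaining node reports a Ready condition of "True". Stripped of the extra quoting the test harness adds, the same check runs directly (a sketch; expect one True line per node):

    # Print each node's Ready condition status.
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'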

TestMultiNode/serial/StopMultiNode (23.77s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-529126 stop
E0925 10:57:07.787174   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/client.crt: no such file or directory
E0925 10:57:16.174970   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/functional-104204/client.crt: no such file or directory
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-529126 stop: (23.633178967s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-529126 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-529126 status: exit status 7 (68.369882ms)

-- stdout --
	multinode-529126
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-529126-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-529126 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-529126 status --alsologtostderr: exit status 7 (71.353045ms)

-- stdout --
	multinode-529126
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-529126-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0925 10:57:20.632522  120383 out.go:296] Setting OutFile to fd 1 ...
	I0925 10:57:20.632680  120383 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 10:57:20.632689  120383 out.go:309] Setting ErrFile to fd 2...
	I0925 10:57:20.632697  120383 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 10:57:20.632895  120383 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17297-5744/.minikube/bin
	I0925 10:57:20.633070  120383 out.go:303] Setting JSON to false
	I0925 10:57:20.633099  120383 mustload.go:65] Loading cluster: multinode-529126
	I0925 10:57:20.633132  120383 notify.go:220] Checking for updates...
	I0925 10:57:20.634500  120383 config.go:182] Loaded profile config "multinode-529126": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0925 10:57:20.634520  120383 status.go:255] checking status of multinode-529126 ...
	I0925 10:57:20.634921  120383 cli_runner.go:164] Run: docker container inspect multinode-529126 --format={{.State.Status}}
	I0925 10:57:20.652579  120383 status.go:330] multinode-529126 host status = "Stopped" (err=<nil>)
	I0925 10:57:20.652620  120383 status.go:343] host is not running, skipping remaining checks
	I0925 10:57:20.652629  120383 status.go:257] multinode-529126 status: &{Name:multinode-529126 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0925 10:57:20.652668  120383 status.go:255] checking status of multinode-529126-m02 ...
	I0925 10:57:20.652930  120383 cli_runner.go:164] Run: docker container inspect multinode-529126-m02 --format={{.State.Status}}
	I0925 10:57:20.669681  120383 status.go:330] multinode-529126-m02 host status = "Stopped" (err=<nil>)
	I0925 10:57:20.669700  120383 status.go:343] host is not running, skipping remaining checks
	I0925 10:57:20.669706  120383 status.go:257] multinode-529126-m02 status: &{Name:multinode-529126-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.77s)
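Note: the non-zero exits above are the expected outcome: minikube status reports cluster state through its exit code as well as stdout, and exit status 7 here accompanies the stopped hosts. A scripted check therefore treats a non-zero code as data rather than failure (a minimal sketch, profile name from this run):

    out/minikube-linux-amd64 -p multinode-529126 status
    rc=$?
    # rc is 0 when everything is running; it was 7 in this run with both hosts stopped.
    echo "status exit code: $rc"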

TestMultiNode/serial/RestartMultiNode (73.81s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-529126 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-529126 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m13.239947629s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-529126 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (73.81s)

TestMultiNode/serial/ValidateNameConflict (23.19s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-529126
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-529126-m02 --driver=docker  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-529126-m02 --driver=docker  --container-runtime=crio: exit status 14 (57.387952ms)

-- stdout --
	* [multinode-529126-m02] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17297
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17297-5744/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17297-5744/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-529126-m02' is duplicated with machine name 'multinode-529126-m02' in profile 'multinode-529126'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-529126-m03 --driver=docker  --container-runtime=crio
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-529126-m03 --driver=docker  --container-runtime=crio: (21.032131978s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-529126
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-529126: exit status 80 (249.612442ms)

-- stdout --
	* Adding node m03 to cluster multinode-529126
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-529126-m03 already exists in multinode-529126-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-529126-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-529126-m03: (1.81357124s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (23.19s)
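Note: both failures above are deliberate name collisions: a new profile may not reuse a machine name belonging to an existing profile (the -m02 case), and node add refuses a node name already taken by a standalone profile (the -m03 case). Listing the names already in use avoids both (commands as used elsewhere in this run):

    # Show existing profiles, and the nodes inside a given profile.
    out/minikube-linux-amd64 profile list
    out/minikube-linux-amd64 node list -p multinode-529126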

TestPreload (131.72s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-399759 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0925 10:59:19.594268   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-399759 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m17.526419365s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-399759 image pull gcr.io/k8s-minikube/busybox
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-399759
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-399759: (5.711904572s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-399759 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0925 11:00:44.738767   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-399759 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (45.16568854s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-399759 image list
helpers_test.go:175: Cleaning up "test-preload-399759" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-399759
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-399759: (2.259835985s)
--- PASS: TestPreload (131.72s)
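Note: condensed, the scenario this test exercises is: create a cluster without preloaded images, pull an extra image, stop, restart, and confirm the pulled image survived. The same sequence by hand (commands taken from this run):

    out/minikube-linux-amd64 start -p test-preload-399759 --memory=2200 --preload=false --driver=docker --container-runtime=crio --kubernetes-version=v1.24.4
    out/minikube-linux-amd64 -p test-preload-399759 image pull gcr.io/k8s-minikube/busybox
    out/minikube-linux-amd64 stop -p test-preload-399759
    out/minikube-linux-amd64 start -p test-preload-399759 --memory=2200 --wait=true --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 -p test-preload-399759 image list   # busybox should still be listed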

TestScheduledStopUnix (97.76s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-912989 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-912989 --memory=2048 --driver=docker  --container-runtime=crio: (21.870843642s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-912989 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-912989 -n scheduled-stop-912989
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-912989 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-912989 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-912989 -n scheduled-stop-912989
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-912989
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-912989 --schedule 15s
E0925 11:02:16.175149   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/functional-104204/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-912989
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-912989: exit status 7 (52.422785ms)

-- stdout --
	scheduled-stop-912989
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-912989 -n scheduled-stop-912989
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-912989 -n scheduled-stop-912989: exit status 7 (54.583619ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-912989" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-912989
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-912989: (4.704417016s)
--- PASS: TestScheduledStopUnix (97.76s)
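Note: the flags exercised above make up the whole scheduled-stop workflow: arm a stop, inspect the countdown, re-arm (which replaces the pending stop), or cancel. In short (flags from this run):

    out/minikube-linux-amd64 stop -p scheduled-stop-912989 --schedule 5m        # arm a stop
    out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-912989
    out/minikube-linux-amd64 stop -p scheduled-stop-912989 --cancel-scheduled   # disarm it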

TestInsufficientStorage (10.19s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-877772 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-877772 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.892683072s)

-- stdout --
	{"specversion":"1.0","id":"1b2be32c-aad7-410e-8e80-41ae96db16e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-877772] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8d691bf9-8f6c-4d98-b08e-19aef92c5c34","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17297"}}
	{"specversion":"1.0","id":"1155fee2-7a0d-4592-a183-ff9241ac3db1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"68c76316-66a4-46e4-b1d0-bfd80270ce96","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17297-5744/kubeconfig"}}
	{"specversion":"1.0","id":"c71f28cd-1a5c-4604-a3f4-50818515d8dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17297-5744/.minikube"}}
	{"specversion":"1.0","id":"bbde5144-671b-43c1-935b-d31d3d94f6f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"3f28d2e2-73ab-4377-b55a-e6669c631b29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"43a57a89-c992-4b80-9f2c-e45874b38b58","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"b0f73c72-52e2-4c7e-957d-b4be8d923556","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"51f6416c-8de0-4505-8231-c92b5b114e1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"65f37194-92f9-4612-af7e-b022af7877fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"8cb5d0f8-178d-404e-9dd2-61a92b8abd09","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-877772 in cluster insufficient-storage-877772","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"ac167b1a-51fa-4811-b28a-25557df2a164","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"7fcd5e27-b6bc-4168-bfc3-a8276ee41178","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"190fc0b9-2a25-4a35-bf91-a14c32309b26","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-877772 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-877772 --output=json --layout=cluster: exit status 7 (244.711437ms)

-- stdout --
	{"Name":"insufficient-storage-877772","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-877772","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0925 11:03:00.728771  141820 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-877772" does not appear in /home/jenkins/minikube-integration/17297-5744/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-877772 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-877772 --output=json --layout=cluster: exit status 7 (241.0913ms)

-- stdout --
	{"Name":"insufficient-storage-877772","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-877772","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0925 11:03:00.970301  141911 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-877772" does not appear in /home/jenkins/minikube-integration/17297-5744/kubeconfig
	E0925 11:03:00.979249  141911 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/insufficient-storage-877772/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-877772" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-877772
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-877772: (1.810873235s)
--- PASS: TestInsufficientStorage (10.19s)
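Note: both commands above emit machine-readable JSON: start --output=json streams one CloudEvents object per line, while status --output=json --layout=cluster prints a single document. A sketch of extracting the failure reason from each (assumes jq is available; field names as seen in the output above, and the storage limits come from the MINIKUBE_TEST_* variables in this run):

    out/minikube-linux-amd64 start -p insufficient-storage-877772 --memory=2048 --output=json --wait=true --driver=docker --container-runtime=crio \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name'    # RSRC_DOCKER_STORAGE here
    out/minikube-linux-amd64 status -p insufficient-storage-877772 --output=json --layout=cluster \
      | jq -r .StatusName                                                     # InsufficientStorage here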

TestKubernetesUpgrade (347.38s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-419091 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0925 11:04:19.593897   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/client.crt: no such file or directory
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-419091 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (48.36099228s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-419091
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-419091: (1.244463449s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-419091 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-419091 status --format={{.Host}}: exit status 7 (101.049101ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-419091 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-419091 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m30.633846248s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-419091 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-419091 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-419091 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (349.637957ms)

-- stdout --
	* [kubernetes-upgrade-419091] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17297
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17297-5744/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17297-5744/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-419091
	    minikube start -p kubernetes-upgrade-419091 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4190912 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.2, by running:
	    
	    minikube start -p kubernetes-upgrade-419091 --kubernetes-version=v1.28.2
	    

** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-419091 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-419091 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (24.537335847s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-419091" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-419091
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-419091: (2.091615772s)
--- PASS: TestKubernetesUpgrade (347.38s)
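Note: the supported version-change path shown here is upgrade-in-place: stop the cluster, then start it again with a newer --kubernetes-version. Downgrading in place is refused (exit status 106 above), and the suggestion block lists the delete-and-recreate alternative. The upgrade step alone (commands from this run):

    out/minikube-linux-amd64 stop -p kubernetes-upgrade-419091
    out/minikube-linux-amd64 start -p kubernetes-upgrade-419091 --memory=2200 --kubernetes-version=v1.28.2 --driver=docker --container-runtime=crio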

TestMissingContainerUpgrade (175.34s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.9.0.2872735345.exe start -p missing-upgrade-504815 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.9.0.2872735345.exe start -p missing-upgrade-504815 --memory=2200 --driver=docker  --container-runtime=crio: (1m28.570696311s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-504815
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-504815
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-504815 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:342: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-504815 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m23.265949216s)
helpers_test.go:175: Cleaning up "missing-upgrade-504815" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-504815
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-504815: (2.395404617s)
--- PASS: TestMissingContainerUpgrade (175.34s)
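Note: this test fakes a half-upgraded machine: an old binary (v1.9.0) creates the cluster, the node container is then removed behind minikube's back with plain Docker, and the current binary must notice and recreate it (commands from this run):

    # Delete the node container without telling minikube.
    docker stop missing-upgrade-504815
    docker rm missing-upgrade-504815
    # The new binary has to detect the missing container and rebuild it.
    out/minikube-linux-amd64 start -p missing-upgrade-504815 --memory=2200 --alsologtostderr -v=1 --driver=docker --container-runtime=crio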

TestStoppedBinaryUpgrade/Setup (0.59s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.59s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-353345 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-353345 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (65.192286ms)

-- stdout --
	* [NoKubernetes-353345] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17297
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17297-5744/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17297-5744/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
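Note: as the error says, --no-kubernetes and --kubernetes-version are mutually exclusive, and a version pinned in the global config triggers the same rejection; the fix suggested in the output is to unset it first:

    out/minikube-linux-amd64 config unset kubernetes-version
    out/minikube-linux-amd64 start -p NoKubernetes-353345 --no-kubernetes --driver=docker --container-runtime=crio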

TestNoKubernetes/serial/StartWithK8s (35.83s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-353345 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-353345 --driver=docker  --container-runtime=crio: (35.526623252s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-353345 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (35.83s)

TestNoKubernetes/serial/StartWithStopK8s (9.07s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-353345 --no-kubernetes --driver=docker  --container-runtime=crio
E0925 11:03:39.219085   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/functional-104204/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-353345 --no-kubernetes --driver=docker  --container-runtime=crio: (6.749717357s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-353345 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-353345 status -o json: exit status 2 (302.326226ms)

-- stdout --
	{"Name":"NoKubernetes-353345","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-353345
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-353345: (2.018256249s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (9.07s)

TestNoKubernetes/serial/Start (10.88s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-353345 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-353345 --no-kubernetes --driver=docker  --container-runtime=crio: (10.883249633s)
--- PASS: TestNoKubernetes/serial/Start (10.88s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-353345 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-353345 "sudo systemctl is-active --quiet service kubelet": exit status 1 (256.283836ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)
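Note: exit status 1 is the pass condition here: systemctl is-active exits 0 only for an active unit (an inactive one returns 3, which minikube ssh surfaces as its own non-zero exit). A standalone version of the check (a sketch):

    out/minikube-linux-amd64 ssh -p NoKubernetes-353345 "sudo systemctl is-active --quiet service kubelet" \
      || echo "kubelet is not running (as expected)"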

TestNoKubernetes/serial/ProfileList (1.47s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.47s)

TestNoKubernetes/serial/Stop (1.21s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-353345
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-353345: (1.207094005s)
--- PASS: TestNoKubernetes/serial/Stop (1.21s)

TestNoKubernetes/serial/StartNoArgs (9.06s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-353345 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-353345 --driver=docker  --container-runtime=crio: (9.061458032s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (9.06s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.39s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-353345 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-353345 "sudo systemctl is-active --quiet service kubelet": exit status 1 (393.496048ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.39s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.53s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-439109
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.53s)

TestPause/serial/Start (54.84s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-768463 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-768463 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (54.837742887s)
--- PASS: TestPause/serial/Start (54.84s)

TestPause/serial/SecondStartNoReconfiguration (38.39s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-768463 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-768463 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (38.353823724s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (38.39s)

TestNetworkPlugins/group/false (3.14s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-269116 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-269116 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (191.030661ms)

-- stdout --
	* [false-269116] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17297
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17297-5744/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17297-5744/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0925 11:06:01.488010  188220 out.go:296] Setting OutFile to fd 1 ...
	I0925 11:06:01.488305  188220 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 11:06:01.488316  188220 out.go:309] Setting ErrFile to fd 2...
	I0925 11:06:01.488323  188220 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 11:06:01.488623  188220 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17297-5744/.minikube/bin
	I0925 11:06:01.489364  188220 out.go:303] Setting JSON to false
	I0925 11:06:01.490942  188220 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2914,"bootTime":1695637048,"procs":380,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0925 11:06:01.491020  188220 start.go:138] virtualization: kvm guest
	I0925 11:06:01.493567  188220 out.go:177] * [false-269116] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0925 11:06:01.495553  188220 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 11:06:01.496988  188220 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 11:06:01.495624  188220 notify.go:220] Checking for updates...
	I0925 11:06:01.499892  188220 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17297-5744/kubeconfig
	I0925 11:06:01.501258  188220 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17297-5744/.minikube
	I0925 11:06:01.502556  188220 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0925 11:06:01.503935  188220 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 11:06:01.505595  188220 config.go:182] Loaded profile config "force-systemd-env-322247": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0925 11:06:01.505699  188220 config.go:182] Loaded profile config "kubernetes-upgrade-419091": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0925 11:06:01.505833  188220 config.go:182] Loaded profile config "pause-768463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0925 11:06:01.505926  188220 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 11:06:01.540764  188220 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0925 11:06:01.540870  188220 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0925 11:06:01.611409  188220 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:66 SystemTime:2023-09-25 11:06:01.600975153 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1042-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0925 11:06:01.611537  188220 docker.go:294] overlay module found
	I0925 11:06:01.613476  188220 out.go:177] * Using the docker driver based on user configuration
	I0925 11:06:01.614875  188220 start.go:298] selected driver: docker
	I0925 11:06:01.614894  188220 start.go:902] validating driver "docker" against <nil>
	I0925 11:06:01.614908  188220 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 11:06:01.617222  188220 out.go:177] 
	W0925 11:06:01.618760  188220 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0925 11:06:01.620119  188220 out.go:177] 

** /stderr **
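Note: exit status 14 is the point of this case: with the crio runtime, minikube rejects --cni=false because crio ships no built-in pod networking. A working start would pick one of the bundled CNIs instead; a sketch (bridge is only an example value, not taken from this run):

    out/minikube-linux-amd64 start -p false-269116 --memory=2048 --cni=bridge --driver=docker --container-runtime=crio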
net_test.go:88: 
----------------------- debugLogs start: false-269116 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-269116

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-269116

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-269116

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-269116

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-269116

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-269116

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-269116

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-269116

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-269116

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-269116

>>> host: /etc/nsswitch.conf:
* Profile "false-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269116"

>>> host: /etc/hosts:
* Profile "false-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269116"

>>> host: /etc/resolv.conf:
* Profile "false-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269116"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-269116

>>> host: crictl pods:
* Profile "false-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269116"

>>> host: crictl containers:
* Profile "false-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269116"

>>> k8s: describe netcat deployment:
error: context "false-269116" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-269116" does not exist

>>> k8s: netcat logs:
error: context "false-269116" does not exist

>>> k8s: describe coredns deployment:
error: context "false-269116" does not exist

>>> k8s: describe coredns pods:
error: context "false-269116" does not exist

>>> k8s: coredns logs:
error: context "false-269116" does not exist

>>> k8s: describe api server pod(s):
error: context "false-269116" does not exist

>>> k8s: api server logs:
error: context "false-269116" does not exist

>>> host: /etc/cni:
* Profile "false-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269116"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269116"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269116"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269116"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269116"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-269116" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-269116" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-269116" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269116"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269116"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269116"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269116"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269116"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17297-5744/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 25 Sep 2023 11:05:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-419091
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17297-5744/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 25 Sep 2023 11:05:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.67.2:8443
  name: pause-768463
contexts:
- context:
    cluster: kubernetes-upgrade-419091
    user: kubernetes-upgrade-419091
  name: kubernetes-upgrade-419091
- context:
    cluster: pause-768463
    extensions:
    - extension:
        last-update: Mon, 25 Sep 2023 11:05:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: pause-768463
  name: pause-768463
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-419091
  user:
    client-certificate: /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/kubernetes-upgrade-419091/client.crt
    client-key: /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/kubernetes-upgrade-419091/client.key
- name: pause-768463
  user:
    client-certificate: /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/pause-768463/client.crt
    client-key: /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/pause-768463/client.key
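
Note that current-context is "" and the only contexts present are kubernetes-upgrade-419091 and pause-768463; the probes above all target context "false-269116", which never existed, hence the repeated "context was not found" errors. A minimal sketch of querying one of the surviving clusters by hand (not part of the test run):

kubectl config use-context pause-768463      # set a default context
kubectl --context pause-768463 get nodes     # or name the context per call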

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-269116

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269116"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269116"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269116"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269116"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269116"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269116"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269116"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269116"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269116"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269116"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269116"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269116"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269116"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269116"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269116"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269116"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269116"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-269116"

                                                
                                                
----------------------- debugLogs end: false-269116 [took: 2.822337126s] --------------------------------
helpers_test.go:175: Cleaning up "false-269116" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-269116
--- PASS: TestNetworkPlugins/group/false (3.14s)
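
The debugLogs failures above are expected: the false-269116 profile was never started, so every host probe reports a missing profile and every kubectl probe a missing context. A hedged sketch of running the same checks by hand, assuming the freshly built binary at out/minikube-linux-amd64:

out/minikube-linux-amd64 profile list            # confirms which profiles actually exist
out/minikube-linux-amd64 delete -p false-269116  # idempotent cleanup, matching the helper above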

                                                
                                    
x
+
TestPause/serial/Pause (1.06s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-768463 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-768463 --alsologtostderr -v=5: (1.05605898s)
--- PASS: TestPause/serial/Pause (1.06s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.31s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-768463 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-768463 --output=json --layout=cluster: exit status 2 (307.2382ms)

                                                
                                                
-- stdout --
	{"Name":"pause-768463","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-768463","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.31s)
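
The status JSON above encodes component states as HTTP-like codes (418 Paused, 405 Stopped, 200 OK, per its own StatusName fields), and the command exits 2 while paused, which the test treats as expected. A sketch for pulling out the per-component states, assuming jq is installed on the host:

out/minikube-linux-amd64 status -p pause-768463 --output=json --layout=cluster \
  | jq -r '.Nodes[].Components | to_entries[] | "\(.key): \(.value.StatusName)"'
# apiserver: Paused
# kubelet: Stopped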

                                                
                                    
x
+
TestPause/serial/Unpause (0.72s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-768463 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.72s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.75s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-768463 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.75s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (2.61s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-768463 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-768463 --alsologtostderr -v=5: (2.608330633s)
--- PASS: TestPause/serial/DeletePaused (2.61s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (15.46s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (15.409188036s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-768463
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-768463: exit status 1 (14.668765ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-768463: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (15.46s)
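
The verification above sweeps all profiles and then inspects the deleted one directly; the same checks can be scoped to the single profile using the docker CLI's name filters:

docker ps -a --filter name=pause-768463       # should list no containers
docker network ls --filter name=pause-768463  # likewise, no leftover network
docker volume inspect pause-768463            # expected to fail with "no such volume"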

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (114.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-880967 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-880967 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (1m54.062079079s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (114.06s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (56.61s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-234928 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
E0925 11:07:16.174814   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/functional-104204/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-234928 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (56.613392249s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (56.61s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (7.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-234928 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [404f50d4-5cb1-495a-bda4-f267a217bf09] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [404f50d4-5cb1-495a-bda4-f267a217bf09] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 7.014853731s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-234928 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (7.42s)
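
testdata/busybox.yaml itself is not reproduced in this report. A roughly equivalent manifest, inferred as a hedged sketch from the selector, the pod name, and the image list later in this report (gcr.io/k8s-minikube/busybox:1.28.4-glibc):

kubectl --context no-preload-234928 apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    integration-test: busybox
spec:
  containers:
  - name: busybox
    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
    command: ["sleep", "3600"]
EOF
kubectl --context no-preload-234928 exec busybox -- /bin/sh -c "ulimit -n"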

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.99s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-234928 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-234928 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.99s)
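
The --images and --registries flags rewrite the addon's image reference, and the describe output above is what the test inspects. A hedged spot-check of the resulting image field:

kubectl --context no-preload-234928 -n kube-system get deploy metrics-server \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
# expected to print something like fake.domain/registry.k8s.io/echoserver:1.4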

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (11.92s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-234928 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-234928 --alsologtostderr -v=3: (11.917579409s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.92s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-234928 -n no-preload-234928
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-234928 -n no-preload-234928: exit status 7 (56.457108ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-234928 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.15s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (333.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-234928 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-234928 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (5m32.84153796s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-234928 -n no-preload-234928
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (333.27s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (8.43s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-880967 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [49284c84-d600-44f2-8270-421aaaa0858d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [49284c84-d600-44f2-8270-421aaaa0858d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.013334222s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-880967 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.43s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.87s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-880967 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-880967 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.87s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (12.04s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-880967 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-880967 --alsologtostderr -v=3: (12.036176195s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.04s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-880967 -n old-k8s-version-880967
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-880967 -n old-k8s-version-880967: exit status 7 (58.026937ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-880967 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.16s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (432.94s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-880967 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E0925 11:09:19.594069   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/ingress-addon-legacy-260900/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-880967 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m12.620268735s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-880967 -n old-k8s-version-880967
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (432.94s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (71.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-125634 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-125634 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (1m11.062758114s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (71.06s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (71.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-603019 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
E0925 11:10:44.739290   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-603019 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (1m11.293732901s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (71.29s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (7.34s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-125634 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fb4a6a3b-acf4-4d99-9647-a6e360aab243] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fb4a6a3b-acf4-4d99-9647-a6e360aab243] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.015004465s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-125634 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.34s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.88s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-125634 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-125634 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.88s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (11.91s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-125634 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-125634 --alsologtostderr -v=3: (11.906131347s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.91s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-603019 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [36f3570b-e025-4e43-a1df-1247280d1ac5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [36f3570b-e025-4e43-a1df-1247280d1ac5] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.016999869s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-603019 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.35s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-125634 -n embed-certs-125634
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-125634 -n embed-certs-125634: exit status 7 (56.537908ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-125634 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.15s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (333.41s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-125634 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-125634 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (5m33.049641891s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-125634 -n embed-certs-125634
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (333.41s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.99s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-603019 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-603019 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.99s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.89s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-603019 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-603019 --alsologtostderr -v=3: (11.894232691s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.89s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-603019 -n default-k8s-diff-port-603019
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-603019 -n default-k8s-diff-port-603019: exit status 7 (56.515475ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-603019 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.15s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (339.92s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-603019 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
E0925 11:12:16.175118   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/functional-104204/client.crt: no such file or directory
E0925 11:13:47.787699   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-603019 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (5m39.622480149s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-603019 -n default-k8s-diff-port-603019
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (339.92s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-pczhd" [f6761ac8-6bac-4ab6-b3dc-db5aaba8312f] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-pczhd" [f6761ac8-6bac-4ab6-b3dc-db5aaba8312f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.0765596s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.08s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-pczhd" [f6761ac8-6bac-4ab6-b3dc-db5aaba8312f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009453484s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-234928 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-234928 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)
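
The image audit above parses the node's CRI image list as JSON. The same listing can be flattened by hand, a sketch assuming jq on the host:

out/minikube-linux-amd64 ssh -p no-preload-234928 "sudo crictl images -o json" \
  | jq -r '.images[].repoTags[]'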

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (2.53s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-234928 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-234928 -n no-preload-234928
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-234928 -n no-preload-234928: exit status 2 (271.803925ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-234928 -n no-preload-234928
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-234928 -n no-preload-234928: exit status 2 (278.24809ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-234928 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-234928 -n no-preload-234928
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-234928 -n no-preload-234928
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.53s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (34.34s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-180140 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-180140 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (34.339460692s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (34.34s)
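
The run above threads --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 through to kubeadm. Whether it took effect can be spot-checked on the node object, assuming as usual that the context name matches the profile:

kubectl --context newest-cni-180140 get nodes -o jsonpath='{.items[0].spec.podCIDR}'
# expected to fall inside 10.42.0.0/16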

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.98s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-180140 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.98s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (1.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-180140 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-180140 --alsologtostderr -v=3: (1.204744029s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.20s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-180140 -n newest-cni-180140
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-180140 -n newest-cni-180140: exit status 7 (62.511593ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-180140 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (26.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-180140 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-180140 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (25.931836927s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-180140 -n newest-cni-180140
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (26.22s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-180140 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.36s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-180140 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-180140 -n newest-cni-180140
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-180140 -n newest-cni-180140: exit status 2 (271.404121ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-180140 -n newest-cni-180140
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-180140 -n newest-cni-180140: exit status 2 (284.031008ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-180140 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-180140 -n newest-cni-180140
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-180140 -n newest-cni-180140
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (39.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-269116 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0925 11:15:44.738758   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/addons-440446/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-269116 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (39.356469311s)
--- PASS: TestNetworkPlugins/group/auto/Start (39.36s)
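
All of the network-plugin Start tests in this run share one invocation shape; only the CNI selection changes (none for auto, --cni=kindnet/flannel/bridge/calico, or a manifest path for custom-flannel). The common form, assuming minikube is on PATH:

	# "auto" lets minikube pick the CNI; the other groups pass an explicit --cni value.
	minikube start -p auto-269116 --memory=3072 --alsologtostderr \
	  --wait=true --wait-timeout=15m --driver=docker --container-runtime=crio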

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-269116 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.25s)
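
The KubeletFlags check is a one-liner: ssh into the node and list the running kubelet with its full command line. A sketch, assuming minikube is on PATH:

	# pgrep -a prints each matching PID with its full argv,
	# so the kubelet's effective flags are visible in one shot.
	minikube ssh -p auto-269116 "pgrep -a kubelet"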

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.24s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-269116 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-l7pbp" [eedff109-416f-4e08-9044-6f6b4a6b9e02] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-l7pbp" [eedff109-416f-4e08-9044-6f6b4a6b9e02] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.007466624s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.24s)
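
The NetCatPod step deploys a small netcat/dnsutils workload from the repo's testdata and waits for it to become Ready. A sketch of the equivalent manual steps, with kubectl wait standing in for the test helper's own polling:

	kubectl --context auto-269116 replace --force -f testdata/netcat-deployment.yaml
	# Block until the pod selected by app=netcat reports Ready.
	kubectl --context auto-269116 wait --for=condition=ready pod -l app=netcat --timeout=15m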

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-6s2x7" [ffd91389-fb12-4478-9eca-689b7616d70f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.014426866s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-6s2x7" [ffd91389-fb12-4478-9eca-689b7616d70f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00791343s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-880967 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-269116 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-269116 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-269116 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
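
The DNS, Localhost, and HairPin checks all run inside the netcat pod: resolve the in-cluster API service, connect to the pod's own localhost port, then connect back to the pod through its own service name (the hairpin case). The three probes, as recorded above:

	# DNS: resolve the in-cluster API service from inside the pod.
	kubectl --context auto-269116 exec deployment/netcat -- nslookup kubernetes.default
	# Localhost: the pod reaches its own port 8080 directly.
	kubectl --context auto-269116 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	# HairPin: the pod reaches itself back through its own service name.
	kubectl --context auto-269116 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"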

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-880967 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.64s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-880967 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-880967 -n old-k8s-version-880967
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-880967 -n old-k8s-version-880967: exit status 2 (287.475286ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-880967 -n old-k8s-version-880967
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-880967 -n old-k8s-version-880967: exit status 2 (278.334171ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-880967 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-880967 -n old-k8s-version-880967
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-880967 -n old-k8s-version-880967
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.64s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (75.17s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-269116 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-269116 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m15.170894402s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (75.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (48.95s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-269116 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-269116 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (48.947667306s)
--- PASS: TestNetworkPlugins/group/flannel/Start (48.95s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (14.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-nxwgf" [790d6e69-73ab-4fb2-9c4f-c397a3510d8f] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-nxwgf" [790d6e69-73ab-4fb2-9c4f-c397a3510d8f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.018807407s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (14.02s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-nxwgf" [790d6e69-73ab-4fb2-9c4f-c397a3510d8f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009292934s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-125634 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-125634 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.82s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-125634 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-125634 -n embed-certs-125634
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-125634 -n embed-certs-125634: exit status 2 (296.055042ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-125634 -n embed-certs-125634
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-125634 -n embed-certs-125634: exit status 2 (310.118469ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-125634 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-125634 -n embed-certs-125634
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-125634 -n embed-certs-125634
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.82s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-grk58" [dac576ab-df96-4a98-a107-e4a412600749] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-grk58" [dac576ab-df96-4a98-a107-e4a412600749] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.01535734s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.02s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (78.1s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-269116 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-269116 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m18.099616165s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (78.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-grk58" [dac576ab-df96-4a98-a107-e4a412600749] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009792668s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-603019 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-xq8fm" [54370b98-72bd-496c-a097-838f97def60a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.018083904s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)
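
CNI plugins that ship a controller pod (flannel, kindnet, calico) get an extra readiness gate before the connectivity checks. A sketch using kubectl wait in place of the test's polling, with the label and namespace taken from the run above:

	# Flannel's controller runs as a DaemonSet pod in the kube-flannel namespace.
	kubectl --context flannel-269116 -n kube-flannel \
	  wait --for=condition=ready pod -l app=flannel --timeout=10m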

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-603019 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.62s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-603019 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-603019 -n default-k8s-diff-port-603019
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-603019 -n default-k8s-diff-port-603019: exit status 2 (293.587999ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-603019 -n default-k8s-diff-port-603019
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-603019 -n default-k8s-diff-port-603019: exit status 2 (272.199484ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-603019 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-603019 -n default-k8s-diff-port-603019
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-603019 -n default-k8s-diff-port-603019
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.62s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-269116 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.24s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-269116 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-5nxns" [7046c4a8-da96-43bc-9337-6296b399439c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-5nxns" [7046c4a8-da96-43bc-9337-6296b399439c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.008768952s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (76.9s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-269116 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-269116 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m16.897604397s)
--- PASS: TestNetworkPlugins/group/bridge/Start (76.90s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-jd8kq" [4e5c18de-6b6a-43dc-b769-bfb0b6977b68] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.018211001s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-269116 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-269116 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-269116 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-269116 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.29s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-269116 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-g67qq" [bee3771d-0ed8-429e-a0b9-ecb0f08ca9e1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-g67qq" [bee3771d-0ed8-429e-a0b9-ecb0f08ca9e1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.013892366s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-269116 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-269116 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-269116 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (62.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-269116 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0925 11:18:04.903040   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/no-preload-234928/client.crt: no such file or directory
E0925 11:18:04.908324   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/no-preload-234928/client.crt: no such file or directory
E0925 11:18:04.918633   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/no-preload-234928/client.crt: no such file or directory
E0925 11:18:04.938958   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/no-preload-234928/client.crt: no such file or directory
E0925 11:18:04.979495   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/no-preload-234928/client.crt: no such file or directory
E0925 11:18:05.059679   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/no-preload-234928/client.crt: no such file or directory
E0925 11:18:05.219844   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/no-preload-234928/client.crt: no such file or directory
E0925 11:18:05.540181   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/no-preload-234928/client.crt: no such file or directory
E0925 11:18:06.180324   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/no-preload-234928/client.crt: no such file or directory
E0925 11:18:07.462241   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/no-preload-234928/client.crt: no such file or directory
E0925 11:18:10.022859   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/no-preload-234928/client.crt: no such file or directory
E0925 11:18:15.144013   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/no-preload-234928/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-269116 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m2.210663887s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (62.21s)
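
Unlike the named plugins, custom-flannel passes a CNI manifest path straight to --cni, so any local YAML can stand in for a built-in choice. The invocation, as recorded above, assuming minikube is on PATH:

	# --cni also accepts a path to a CNI manifest instead of a plugin name.
	minikube start -p custom-flannel-269116 --memory=3072 --alsologtostderr \
	  --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml \
	  --driver=docker --container-runtime=crio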

                                                
                                    
TestNetworkPlugins/group/calico/Start (62.62s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-269116 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E0925 11:18:25.384234   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/no-preload-234928/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-269116 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m2.616009804s)
--- PASS: TestNetworkPlugins/group/calico/Start (62.62s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-269116 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-269116 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-fj2d2" [3862c293-fbfb-4f6f-974f-cc1e766413d8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0925 11:18:38.402485   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/old-k8s-version-880967/client.crt: no such file or directory
E0925 11:18:38.407740   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/old-k8s-version-880967/client.crt: no such file or directory
E0925 11:18:38.417961   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/old-k8s-version-880967/client.crt: no such file or directory
E0925 11:18:38.438238   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/old-k8s-version-880967/client.crt: no such file or directory
E0925 11:18:38.478517   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/old-k8s-version-880967/client.crt: no such file or directory
E0925 11:18:38.558861   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/old-k8s-version-880967/client.crt: no such file or directory
E0925 11:18:38.719346   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/old-k8s-version-880967/client.crt: no such file or directory
E0925 11:18:39.039931   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/old-k8s-version-880967/client.crt: no such file or directory
E0925 11:18:39.680998   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/old-k8s-version-880967/client.crt: no such file or directory
E0925 11:18:40.961519   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/old-k8s-version-880967/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-fj2d2" [3862c293-fbfb-4f6f-974f-cc1e766413d8] Running
E0925 11:18:43.522338   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/old-k8s-version-880967/client.crt: no such file or directory
E0925 11:18:45.865096   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/no-preload-234928/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.008722704s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.26s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-269116 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-269116 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-269116 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-269116 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.39s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-269116 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-8cb29" [685fe200-1cae-4db0-a74a-019b9b0eb79e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-8cb29" [685fe200-1cae-4db0-a74a-019b9b0eb79e] Running
E0925 11:18:58.884228   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/old-k8s-version-880967/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.010547697s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.39s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-269116 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-269116 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-269116 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-269116 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-269116 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-fwdcc" [f62e8711-1fff-4677-b43e-26108b7cbc07] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-fwdcc" [f62e8711-1fff-4677-b43e-26108b7cbc07] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.097121783s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-269116 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-269116 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-269116 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-s45rh" [2d5a8057-23a6-4e35-8c17-53b52db0a57f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.019939208s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-269116 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.3s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-269116 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-wtkqt" [69662e47-9e5a-4785-8474-b6c6d077e8bd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0925 11:19:26.826060   12516 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/no-preload-234928/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-wtkqt" [69662e47-9e5a-4785-8474-b6c6d077e8bd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.008365029s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.30s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-269116 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.26s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-269116 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.26s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-269116 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    

Test skip (24/304)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.2/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.2/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.2/kubectl (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

x
+
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

x
+
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

x
+
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

x
+
TestStartStop/group/disable-driver-mounts (0.14s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-505911" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-505911
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)
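Note: even when a group skips, the helper still deletes the placeholder profile, which is where the 0.14s comes from. A sketch of that cleanup step invoking the same binary the log shows (CleanupProfile is a hypothetical wrapper; the real logic lives in helpers_test.go):

    package integration

    import (
        "os/exec"
        "testing"
    )

    // CleanupProfile removes a minikube profile left behind by a
    // skipped or failed test; failure to delete is logged, not fatal.
    func CleanupProfile(t *testing.T, profile string) {
        t.Helper()
        out, err := exec.Command("out/minikube-linux-amd64", "delete", "-p", profile).CombinedOutput()
        if err != nil {
            t.Logf("failed to clean up profile %q: %v\n%s", profile, err, out)
        }
    }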

x
+
TestNetworkPlugins/group/kubenet (3.3s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-269116 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-269116

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-269116

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-269116

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-269116

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-269116

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-269116

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-269116

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-269116

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-269116

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-269116

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269116"

>>> host: /etc/hosts:
* Profile "kubenet-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269116"

>>> host: /etc/resolv.conf:
* Profile "kubenet-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269116"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-269116

>>> host: crictl pods:
* Profile "kubenet-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269116"

>>> host: crictl containers:
* Profile "kubenet-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269116"

>>> k8s: describe netcat deployment:
error: context "kubenet-269116" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-269116" does not exist

>>> k8s: netcat logs:
error: context "kubenet-269116" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-269116" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-269116" does not exist

>>> k8s: coredns logs:
error: context "kubenet-269116" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-269116" does not exist

>>> k8s: api server logs:
error: context "kubenet-269116" does not exist

>>> host: /etc/cni:
* Profile "kubenet-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269116"

>>> host: ip a s:
* Profile "kubenet-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269116"

>>> host: ip r s:
* Profile "kubenet-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269116"

>>> host: iptables-save:
* Profile "kubenet-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269116"

>>> host: iptables table nat:
* Profile "kubenet-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269116"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-269116" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-269116" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-269116" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269116"

>>> host: kubelet daemon config:
* Profile "kubenet-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269116"

>>> k8s: kubelet logs:
* Profile "kubenet-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269116"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269116"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269116"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17297-5744/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 25 Sep 2023 11:05:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-419091
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17297-5744/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 25 Sep 2023 11:05:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.67.2:8443
  name: pause-768463
contexts:
- context:
    cluster: kubernetes-upgrade-419091
    user: kubernetes-upgrade-419091
  name: kubernetes-upgrade-419091
- context:
    cluster: pause-768463
    extensions:
    - extension:
        last-update: Mon, 25 Sep 2023 11:05:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: pause-768463
  name: pause-768463
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-419091
  user:
    client-certificate: /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/kubernetes-upgrade-419091/client.crt
    client-key: /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/kubernetes-upgrade-419091/client.key
- name: pause-768463
  user:
    client-certificate: /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/pause-768463/client.crt
    client-key: /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/pause-768463/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-269116

>>> host: docker daemon status:
* Profile "kubenet-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269116"

>>> host: docker daemon config:
* Profile "kubenet-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269116"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269116"

>>> host: docker system info:
* Profile "kubenet-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269116"

>>> host: cri-docker daemon status:
* Profile "kubenet-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269116"

>>> host: cri-docker daemon config:
* Profile "kubenet-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269116"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269116"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269116"

>>> host: cri-dockerd version:
* Profile "kubenet-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269116"

>>> host: containerd daemon status:
* Profile "kubenet-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269116"

>>> host: containerd daemon config:
* Profile "kubenet-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269116"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269116"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269116"

>>> host: containerd config dump:
* Profile "kubenet-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269116"

>>> host: crio daemon status:
* Profile "kubenet-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269116"

>>> host: crio daemon config:
* Profile "kubenet-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269116"

>>> host: /etc/crio:
* Profile "kubenet-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269116"

>>> host: crio config:
* Profile "kubenet-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-269116"

----------------------- debugLogs end: kubenet-269116 [took: 3.127610486s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-269116" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-269116
--- SKIP: TestNetworkPlugins/group/kubenet (3.30s)
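Note: kubenet is not a CNI plugin, and crio cannot run pods without a CNI, so this group skips before any cluster is created; that is why every debug probe above reports a missing kubenet-269116 context or profile. A sketch of the gate at net_test.go:93, with parameter names assumed for illustration:

    package integration

    import "testing"

    // skipIfRuntimeNeedsCNI skips the kubenet network-plugin group on
    // crio, which refuses to run pods without a CNI configuration.
    func skipIfRuntimeNeedsCNI(t *testing.T, containerRuntime, cni string) {
        t.Helper()
        if containerRuntime == "crio" && cni == "kubenet" {
            t.Skip("Skipping the test as crio container runtimes requires CNI")
        }
    }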

x
+
TestNetworkPlugins/group/cilium (3.21s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-269116 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-269116

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-269116

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-269116

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-269116

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-269116

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-269116

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-269116

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-269116

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-269116

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-269116

>>> host: /etc/nsswitch.conf:
* Profile "cilium-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269116"

>>> host: /etc/hosts:
* Profile "cilium-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269116"

>>> host: /etc/resolv.conf:
* Profile "cilium-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269116"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-269116

>>> host: crictl pods:
* Profile "cilium-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269116"

>>> host: crictl containers:
* Profile "cilium-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269116"

>>> k8s: describe netcat deployment:
error: context "cilium-269116" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-269116" does not exist

>>> k8s: netcat logs:
error: context "cilium-269116" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-269116" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-269116" does not exist

>>> k8s: coredns logs:
error: context "cilium-269116" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-269116" does not exist

>>> k8s: api server logs:
error: context "cilium-269116" does not exist

>>> host: /etc/cni:
* Profile "cilium-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269116"

>>> host: ip a s:
* Profile "cilium-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269116"

>>> host: ip r s:
* Profile "cilium-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269116"

>>> host: iptables-save:
* Profile "cilium-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269116"

>>> host: iptables table nat:
* Profile "cilium-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269116"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-269116

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-269116

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-269116" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-269116" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-269116

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-269116

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-269116" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-269116" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-269116" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-269116" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-269116" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269116"

>>> host: kubelet daemon config:
* Profile "cilium-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269116"

>>> k8s: kubelet logs:
* Profile "cilium-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269116"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269116"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269116"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17297-5744/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 25 Sep 2023 11:06:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.94.2:8443
  name: force-systemd-env-322247
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17297-5744/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 25 Sep 2023 11:05:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-419091
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17297-5744/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 25 Sep 2023 11:06:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.67.2:8443
  name: pause-768463
contexts:
- context:
    cluster: force-systemd-env-322247
    extensions:
    - extension:
        last-update: Mon, 25 Sep 2023 11:06:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: force-systemd-env-322247
  name: force-systemd-env-322247
- context:
    cluster: kubernetes-upgrade-419091
    user: kubernetes-upgrade-419091
  name: kubernetes-upgrade-419091
- context:
    cluster: pause-768463
    extensions:
    - extension:
        last-update: Mon, 25 Sep 2023 11:06:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: pause-768463
  name: pause-768463
current-context: force-systemd-env-322247
kind: Config
preferences: {}
users:
- name: force-systemd-env-322247
  user:
    client-certificate: /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/force-systemd-env-322247/client.crt
    client-key: /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/force-systemd-env-322247/client.key
- name: kubernetes-upgrade-419091
  user:
    client-certificate: /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/kubernetes-upgrade-419091/client.crt
    client-key: /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/kubernetes-upgrade-419091/client.key
- name: pause-768463
  user:
    client-certificate: /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/pause-768463/client.crt
    client-key: /home/jenkins/minikube-integration/17297-5744/.minikube/profiles/pause-768463/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-269116

>>> host: docker daemon status:
* Profile "cilium-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269116"

>>> host: docker daemon config:
* Profile "cilium-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269116"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269116"

>>> host: docker system info:
* Profile "cilium-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269116"

>>> host: cri-docker daemon status:
* Profile "cilium-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269116"

>>> host: cri-docker daemon config:
* Profile "cilium-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269116"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269116"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269116"

>>> host: cri-dockerd version:
* Profile "cilium-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269116"

>>> host: containerd daemon status:
* Profile "cilium-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269116"

>>> host: containerd daemon config:
* Profile "cilium-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269116"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269116"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269116"

>>> host: containerd config dump:
* Profile "cilium-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269116"

>>> host: crio daemon status:
* Profile "cilium-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269116"

>>> host: crio daemon config:
* Profile "cilium-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269116"

>>> host: /etc/crio:
* Profile "cilium-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269116"

>>> host: crio config:
* Profile "cilium-269116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-269116"

----------------------- debugLogs end: cilium-269116 [took: 3.054797653s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-269116" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-269116
--- SKIP: TestNetworkPlugins/group/cilium (3.21s)
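Note: the "kubectl config" dumps in both debug-log blocks above print the run's shared kubeconfig, so the clusters they list (force-systemd-env-322247, kubernetes-upgrade-419091, pause-768463) belong to other tests executing in parallel, not to the skipped profiles. A small sketch of reading such a kubeconfig programmatically with client-go (the file path is an example, not taken from this run):

    package main

    import (
        "fmt"
        "log"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load the same kubeconfig the debug collector dumps above.
        cfg, err := clientcmd.LoadFromFile("/home/jenkins/.kube/config")
        if err != nil {
            log.Fatal(err)
        }
        for name, ctx := range cfg.Contexts {
            fmt.Printf("context %s -> cluster %s (namespace %q)\n", name, ctx.Cluster, ctx.Namespace)
        }
        fmt.Println("current-context:", cfg.CurrentContext)
    }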
